Dec 06 03:05:45 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 06 03:05:45 crc restorecon[4750]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:45 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 03:05:46 crc restorecon[4750]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 
03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 
crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 
03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 03:05:46 crc restorecon[4750]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc 
restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 03:05:46 crc restorecon[4750]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 06 03:05:47 crc kubenswrapper[4801]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.006940 4801 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010719 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010742 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010748 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010764 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010799 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010808 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010814 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010820 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010825 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010829 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010833 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010837 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010841 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010845 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010848 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010852 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010856 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010860 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 
03:05:47.010865 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010869 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010873 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010878 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010883 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010888 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010893 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010900 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010907 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010913 4801 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010918 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010923 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010928 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010933 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010937 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010941 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010945 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010949 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010953 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010957 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010961 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010966 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010970 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010973 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010977 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010982 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010986 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010989 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010993 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.010997 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011004 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011008 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011013 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011016 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011020 4801 feature_gate.go:330] unrecognized feature gate: Example
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011023 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011027 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011030 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011033 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011037 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011041 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011044 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011047 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011051 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011054 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011058 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011061 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011065 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011068 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011071 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011074 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011079 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.011083 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011327 4801 flags.go:64] FLAG: --address="0.0.0.0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011340 4801 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011349 4801 flags.go:64] FLAG: --anonymous-auth="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011356 4801 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011365 4801 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011370 4801 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011377 4801 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011384 4801 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011389 4801 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011400 4801 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011406 4801 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011411 4801 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011416 4801 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011422 4801 flags.go:64] FLAG: --cgroup-root=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011427 4801 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011433 4801 flags.go:64] FLAG: --client-ca-file=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011438 4801 flags.go:64] FLAG: --cloud-config=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011442 4801 flags.go:64] FLAG: --cloud-provider=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011447 4801 flags.go:64] FLAG: --cluster-dns="[]"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011452 4801 flags.go:64] FLAG: --cluster-domain=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011456 4801 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011460 4801 flags.go:64] FLAG: --config-dir=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011465 4801 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011469 4801 flags.go:64] FLAG: --container-log-max-files="5"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011476 4801 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011480 4801 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011485 4801 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011489 4801 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011494 4801 flags.go:64] FLAG: --contention-profiling="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011498 4801 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011502 4801 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011506 4801 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011510 4801 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011515 4801 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011519 4801 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011524 4801 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011528 4801 flags.go:64] FLAG: --enable-load-reader="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011532 4801 flags.go:64] FLAG: --enable-server="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011537 4801 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011543 4801 flags.go:64] FLAG: --event-burst="100"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011548 4801 flags.go:64] FLAG: --event-qps="50"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011555 4801 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011560 4801 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011564 4801 flags.go:64] FLAG: --eviction-hard=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011569 4801 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011574 4801 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011578 4801 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011583 4801 flags.go:64] FLAG: --eviction-soft=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011586 4801 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011591 4801 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011595 4801 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011600 4801 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011603 4801 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011608 4801 flags.go:64] FLAG: --fail-swap-on="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011611 4801 flags.go:64] FLAG: --feature-gates=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011616 4801 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011621 4801 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011625 4801 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011629 4801 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011634 4801 flags.go:64] FLAG: --healthz-port="10248"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011638 4801 flags.go:64] FLAG: --help="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011642 4801 flags.go:64] FLAG: --hostname-override=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011646 4801 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011649 4801 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011654 4801 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011657 4801 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011661 4801 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011665 4801 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011669 4801 flags.go:64] FLAG: --image-service-endpoint=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011673 4801 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011677 4801 flags.go:64] FLAG: --kube-api-burst="100"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011681 4801 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011685 4801 flags.go:64] FLAG: --kube-api-qps="50"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011690 4801 flags.go:64] FLAG: --kube-reserved=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011694 4801 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011698 4801 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011703 4801 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011709 4801 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011713 4801 flags.go:64] FLAG: --lock-file=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011717 4801 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011721 4801 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011725 4801 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011734 4801 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011739 4801 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011743 4801 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011748 4801 flags.go:64] FLAG: --logging-format="text"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011759 4801 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011765 4801 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011789 4801 flags.go:64] FLAG: --manifest-url=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011794 4801 flags.go:64] FLAG: --manifest-url-header=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011801 4801 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011806 4801 flags.go:64] FLAG: --max-open-files="1000000"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011813 4801 flags.go:64] FLAG: --max-pods="110"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011818 4801 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011823 4801 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011828 4801 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011833 4801 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011838 4801 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011842 4801 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011846 4801 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011856 4801 flags.go:64] FLAG: --node-status-max-images="50"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011860 4801 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011864 4801 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011868 4801 flags.go:64] FLAG: --pod-cidr=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011872 4801 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011877 4801 flags.go:64] FLAG: --pod-manifest-path=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011883 4801 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011887 4801 flags.go:64] FLAG: --pods-per-core="0"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011891 4801 flags.go:64] FLAG: --port="10250"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011896 4801 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011900 4801 flags.go:64] FLAG: --provider-id=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011904 4801 flags.go:64] FLAG: --qos-reserved=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011909 4801 flags.go:64] FLAG: --read-only-port="10255"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011913 4801 flags.go:64] FLAG: --register-node="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011917 4801 flags.go:64] FLAG: --register-schedulable="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011921 4801 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011928 4801 flags.go:64] FLAG: --registry-burst="10"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011932 4801 flags.go:64] FLAG: --registry-qps="5"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011937 4801 flags.go:64] FLAG: --reserved-cpus=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011942 4801 flags.go:64] FLAG: --reserved-memory=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011948 4801 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011953 4801 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011958 4801 flags.go:64] FLAG: --rotate-certificates="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011963 4801 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011968 4801 flags.go:64] FLAG: --runonce="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011973 4801 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011977 4801 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011982 4801 flags.go:64] FLAG: --seccomp-default="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011987 4801 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011992 4801 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.011997 4801 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012002 4801 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012008 4801 flags.go:64] FLAG: --storage-driver-password="root"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012014 4801 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012018 4801 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012022 4801 flags.go:64] FLAG: --storage-driver-user="root"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012027 4801 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012031 4801 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012037 4801 flags.go:64] FLAG: --system-cgroups=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012041 4801 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012048 4801 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012052 4801 flags.go:64] FLAG: --tls-cert-file=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012056 4801 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012061 4801 flags.go:64] FLAG: --tls-min-version=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012065 4801 flags.go:64] FLAG: --tls-private-key-file=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012069 4801 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012074 4801 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012078 4801 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012083 4801 flags.go:64] FLAG: --v="2"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012088 4801 flags.go:64] FLAG: --version="false"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012094 4801 flags.go:64] FLAG: --vmodule=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012099 4801 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.012103 4801 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012720 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012798 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012808 4801 feature_gate.go:330] unrecognized feature gate: Example
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012813 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012818 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012835 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012840 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012845 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012849 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012853 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012857 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012862 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012866 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012870 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012874 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012878 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012886 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012896 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012907 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012914 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012921 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012925 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012929 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012933 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012937 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012942 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012946 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012950 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012954 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012957 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012966 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012970 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012974 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012980 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012984 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012988 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.012996 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013001 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013006 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013010 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013017 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013023 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013031 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013035 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013040 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013046 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013050 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013054 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013060 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013065 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013069 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013074 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013078 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013082 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013086 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013093 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013097 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013101 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013104 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013109 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013113 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013116 4801 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013120 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013124 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013128 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013136 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013143 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013151 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013155 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013158 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.013162 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.013173 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.025853 4801 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.025890 4801 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026004 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026018 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026029 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026039 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026048 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026056 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026064 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026072 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026080 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026089 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026096 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026104 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026111 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026121 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026130 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 03:05:47 crc 
kubenswrapper[4801]: W1206 03:05:47.026137 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026145 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026155 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026162 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026170 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026181 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026190 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026199 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026207 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026215 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026223 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026231 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026238 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026247 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 
03:05:47.026254 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026263 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026273 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026282 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026290 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026298 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026307 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026315 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026326 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026337 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026345 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026355 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026363 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026371 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026380 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026390 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026400 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026407 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026415 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026423 4801 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026430 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026438 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026446 4801 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026453 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026461 4801 feature_gate.go:330] unrecognized feature gate: Example Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026468 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026476 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026484 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026491 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026499 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026507 4801 feature_gate.go:330] 
unrecognized feature gate: AlibabaPlatform Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026515 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026523 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026531 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026539 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026546 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026554 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026562 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026572 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026582 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026590 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026600 4801 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.026613 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026875 4801 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026889 4801 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026898 4801 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026906 4801 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026915 4801 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026923 4801 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026931 4801 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026938 4801 feature_gate.go:330] unrecognized 
feature gate: CSIDriverSharedResource Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026946 4801 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026954 4801 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026961 4801 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026968 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026976 4801 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026985 4801 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.026993 4801 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027001 4801 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027011 4801 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027021 4801 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027030 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027039 4801 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027049 4801 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027059 4801 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027068 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027076 4801 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027087 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027096 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027103 4801 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027112 4801 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027120 4801 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027130 4801 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027141 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027151 4801 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027159 4801 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027167 4801 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027175 4801 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027183 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027191 4801 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027200 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027207 4801 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027215 4801 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027223 4801 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027231 4801 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027238 4801 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027246 4801 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027253 4801 feature_gate.go:330] 
unrecognized feature gate: AzureWorkloadIdentity Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027261 4801 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027269 4801 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027285 4801 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027292 4801 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027300 4801 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027307 4801 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027318 4801 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027327 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027335 4801 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027343 4801 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027350 4801 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027358 4801 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027366 4801 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027373 4801 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027381 4801 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027389 4801 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027397 4801 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027404 4801 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027412 4801 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027420 4801 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027428 4801 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027435 4801 
feature_gate.go:330] unrecognized feature gate: Example Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027443 4801 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027450 4801 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027458 4801 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.027466 4801 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.027477 4801 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.027712 4801 server.go:940] "Client rotation is on, will bootstrap in background" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.032273 4801 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.032402 4801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.033359 4801 server.go:997] "Starting client certificate rotation" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.033399 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.033640 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-20 07:01:46.833649871 +0000 UTC Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.033817 4801 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 339h55m59.79983734s for next certificate rotation Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.044295 4801 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.047750 4801 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.057983 4801 log.go:25] "Validated CRI v1 runtime API" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.082085 4801 log.go:25] "Validated CRI v1 image API" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.084112 4801 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.087600 4801 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-06-03-00-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.087646 4801 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.110280 4801 manager.go:217] Machine: {Timestamp:2025-12-06 03:05:47.10861253 +0000 UTC m=+0.231220132 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0b4685bb-fe54-4148-bf81-6b341147ef19 BootID:abfd5160-9396-4d4e-928e-b20d5ecf73f0 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e1:0b:2a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e1:0b:2a Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:03:e4:2f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:5b:47 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:16:e6:a8 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:90:5d:15 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:5c:1d:52 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6a:84:62:d1:b1:75 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:82:27:71:35:65 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] 
SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.110679 4801 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.110936 4801 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.111767 4801 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112031 4801 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112184 4801 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112522 4801 topology_manager.go:138] "Creating topology manager with none policy"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112539 4801 container_manager_linux.go:303] "Creating device plugin manager"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112851 4801 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.112898 4801 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.113218 4801 state_mem.go:36] "Initialized new in-memory state store"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.113363 4801 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.114165 4801 kubelet.go:418] "Attempting to sync node with API server"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.114195 4801 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.114229 4801 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.114249 4801 kubelet.go:324] "Adding apiserver pod source"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.114266 4801 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.116815 4801 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.117516 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.117561 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.117695 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.117704 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.117595 4801 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.119065 4801 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.119915 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.119958 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.119974 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.119998 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120023 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120039 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120053 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120083 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120105 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120124 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120188 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120202 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.120472 4801 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.121218 4801 server.go:1280] "Started kubelet"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.121854 4801 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.122021 4801 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.123541 4801 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 06 03:05:47 crc systemd[1]: Started Kubernetes Kubelet.
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.123837 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.123874 4801 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.124320 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.125004 4801 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.125049 4801 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.124928 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:00:12.39325101 +0000 UTC
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.133442 4801 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 270h54m25.259827419s for next certificate rotation
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.126426 4801 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.126538 4801 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.134281 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.134459 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.125685 4801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e815e90266815 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 03:05:47.121158165 +0000 UTC m=+0.243765787,LastTimestamp:2025-12-06 03:05:47.121158165 +0000 UTC m=+0.243765787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.135723 4801 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.135761 4801 factory.go:55] Registering systemd factory
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.135795 4801 factory.go:221] Registration of the systemd container factory successfully
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.135949 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="200ms"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.136059 4801 server.go:460] "Adding debug handlers to kubelet server"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.145872 4801 factory.go:153] Registering CRI-O factory
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.145913 4801 factory.go:221] Registration of the crio container factory successfully
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.145949 4801 factory.go:103] Registering Raw factory
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.145970 4801 manager.go:1196] Started watching for new ooms in manager
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.146575 4801 manager.go:319] Starting recovery of all containers
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148642 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148768 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148829 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148855 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148882 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148907 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148934 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148966 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.148999 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149056 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149085 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149113 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149161 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149191 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149221 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149245 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149275 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149300 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149320 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149341 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149363 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149384 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149405 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149426 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149448 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149477 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149499 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149520 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149553 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149579 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149602 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149623 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149721 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149764 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149874 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149900 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149925 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149952 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.149978 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150006 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150033 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150059 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150085 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150112 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150376 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150398 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150419 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150439 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150460 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150484 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150508 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150531 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150569 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150599 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150628 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150657 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150767 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150837 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150864 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150888 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150909 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150928 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150950 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150970 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.150989 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151012 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151033 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151053 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151073 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151092 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151112 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151132 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151151 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151171 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151191 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151214 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151264 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151283 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151304 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151323 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151343 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151367 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5"
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151387 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151408 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151427 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151446 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151466 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151486 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151505 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151525 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151545 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151564 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151585 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151605 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151624 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151643 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151663 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151685 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151707 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151733 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" 
seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151760 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151814 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151834 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151859 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151890 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151912 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 
03:05:47.151934 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151957 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.151979 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152000 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152025 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152046 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152066 4801 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152089 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152110 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152129 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152150 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152170 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152190 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152210 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152230 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152255 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152275 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152298 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152319 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152338 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152357 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152377 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152397 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152419 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152438 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152456 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152473 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152491 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152510 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152529 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152559 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152579 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152600 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152619 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152638 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152657 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152676 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152714 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152742 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152804 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152825 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152842 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152867 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152886 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152905 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152923 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152941 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152979 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.152999 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153018 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153039 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153060 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153078 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153111 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153177 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" 
seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153199 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153219 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153252 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153272 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153291 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153310 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153328 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153352 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153373 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153393 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153416 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153438 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153458 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153478 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153508 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153528 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153552 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153573 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153593 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153613 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153632 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153660 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153685 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153709 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153737 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.153811 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155244 4801 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155330 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155356 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155376 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155401 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155420 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155442 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155461 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155480 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155498 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155519 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155539 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155561 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155581 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155604 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155624 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155645 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155664 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155684 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155704 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155722 4801 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155741 4801 reconstruct.go:97] "Volume reconstruction finished"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.155755 4801 reconciler.go:26] "Reconciler: start to sync state"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.164072 4801 manager.go:324] Recovery completed
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.180211 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.182560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.182840 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.182857 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.184253 4801 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.184296 4801 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.184455 4801 state_mem.go:36] "Initialized new in-memory state store"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.197310 4801 policy_none.go:49] "None policy: Start"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.200360 4801 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.200433 4801 state_mem.go:35] "Initializing new in-memory state store"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.209661 4801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.210984 4801 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.211032 4801 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.211072 4801 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.211676 4801 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.211923 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.212076 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.234142 4801 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.258565 4801 manager.go:334] "Starting Device Plugin manager"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.259901 4801 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.259948 4801 server.go:79] "Starting device plugin registration server"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.260466 4801 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.260489 4801 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.260789 4801 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.260974 4801 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.260986 4801 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.269170 4801 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.312989 4801 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.313213 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315412 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315671 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.315719 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316891 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316920 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.316985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317047 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317126 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317170 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317651 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317697 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317964 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.318045 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.318071 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.317970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.318887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.318931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.318952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319377 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319498 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.319534 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320248 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320471 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320510 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.320987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.321010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.321270 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.321293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.321302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.337533 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="400ms"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359129 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359168 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359210 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359229 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359313 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359429 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359532 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359603 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359634 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359699 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359730 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.359807 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.361086 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.362956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.363012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.363028 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.363062 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.363599 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461882 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461914 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461973 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.461992 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462026 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462046 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462079 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462132 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462104 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462180 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462163 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462242 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462281 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462263 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462304 4801 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462331 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462212 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462062 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462351 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462393 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462476 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462508 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462524 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 
06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.462403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.563882 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.565600 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.565661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.565674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.565714 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.566457 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.641697 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.650485 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.666123 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.678937 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d142f999b8f6163f3207a0960107510c874de689c4ff0b1b627d43ff8b90598e WatchSource:0}: Error finding container d142f999b8f6163f3207a0960107510c874de689c4ff0b1b627d43ff8b90598e: Status 404 returned error can't find the container with id d142f999b8f6163f3207a0960107510c874de689c4ff0b1b627d43ff8b90598e Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.679318 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7303ddcd653fcb402b3ca26c7b95eddaf6c3f117bb432fb073455d28cd470c20 WatchSource:0}: Error finding container 7303ddcd653fcb402b3ca26c7b95eddaf6c3f117bb432fb073455d28cd470c20: Status 404 returned error can't find the container with id 7303ddcd653fcb402b3ca26c7b95eddaf6c3f117bb432fb073455d28cd470c20 Dec 06 03:05:47 crc kubenswrapper[4801]: W1206 03:05:47.685487 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-10c2aa5d5f0929bd8a611c926d08450fdf50193558b562a6ef3cf7d0127779dd WatchSource:0}: Error finding container 10c2aa5d5f0929bd8a611c926d08450fdf50193558b562a6ef3cf7d0127779dd: Status 404 returned error can't find the container with id 10c2aa5d5f0929bd8a611c926d08450fdf50193558b562a6ef3cf7d0127779dd Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.691820 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.696955 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.738419 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="800ms" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.966799 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.968809 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.968882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.968903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:47 crc kubenswrapper[4801]: I1206 03:05:47.968946 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 03:05:47 crc kubenswrapper[4801]: E1206 03:05:47.969603 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.17:6443: connect: connection refused" node="crc" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.125314 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection 
refused Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.220128 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78" exitCode=0 Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.220251 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.220511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d142f999b8f6163f3207a0960107510c874de689c4ff0b1b627d43ff8b90598e"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.220748 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.223052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.223091 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.223108 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: W1206 03:05:48.223525 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 06 03:05:48 crc kubenswrapper[4801]: E1206 03:05:48.223707 4801 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.225144 4801 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c744cb5733f92c5da5b8e294bf1fe18e7dad93bbfbf3ad84dd493fe13a40bfd1" exitCode=0 Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.225217 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c744cb5733f92c5da5b8e294bf1fe18e7dad93bbfbf3ad84dd493fe13a40bfd1"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.225253 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1c93e108c3e8a768db6fabc55958b3c47a81438bb156c40f6efb9948aa2da55c"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.225334 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.226290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.226313 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.226322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.228826 4801 generic.go:334] "Generic (PLEG): container 
finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927" exitCode=0 Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.228911 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.228958 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a76309cf6facfef259b2f0ee02d8fb10de599a72784b0c8886322b7736db1a17"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.230151 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.230178 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"10c2aa5d5f0929bd8a611c926d08450fdf50193558b562a6ef3cf7d0127779dd"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.230333 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231312 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231340 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc 
kubenswrapper[4801]: I1206 03:05:48.231350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231867 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270" exitCode=0 Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231893 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231911 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7303ddcd653fcb402b3ca26c7b95eddaf6c3f117bb432fb073455d28cd470c20"} Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.231997 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.233417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.233451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.233460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.236026 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.236747 4801 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.236781 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.236791 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: W1206 03:05:48.272265 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 06 03:05:48 crc kubenswrapper[4801]: E1206 03:05:48.272380 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 06 03:05:48 crc kubenswrapper[4801]: W1206 03:05:48.360088 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 06 03:05:48 crc kubenswrapper[4801]: E1206 03:05:48.360188 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 06 03:05:48 crc kubenswrapper[4801]: W1206 03:05:48.370132 4801 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.17:6443: connect: connection refused Dec 06 03:05:48 crc kubenswrapper[4801]: E1206 03:05:48.370283 4801 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.17:6443: connect: connection refused" logger="UnhandledError" Dec 06 03:05:48 crc kubenswrapper[4801]: E1206 03:05:48.539882 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="1.6s" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.770674 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.776621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.776675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.776685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:48 crc kubenswrapper[4801]: I1206 03:05:48.776715 4801 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.236918 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cce160b311f3f235e8920ddf568101fde66fb469405479f12d8454ec00883399"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.237071 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.238033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.238081 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.238092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.239614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.239680 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.239694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.239794 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:49 crc 
kubenswrapper[4801]: I1206 03:05:49.240454 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.240489 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.240500 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.241742 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.241737 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.241808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.241821 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.242259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.242311 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.242320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.243950 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.243970 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.243981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.243991 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.245304 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0" exitCode=0 Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.245357 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0"} Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.245530 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.246457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.246479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:49 crc kubenswrapper[4801]: I1206 03:05:49.246494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.253309 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3"} Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.253507 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.254912 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.254956 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.254971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.257962 4801 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e" exitCode=0 Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.258076 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.258480 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e"} Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.258586 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.259905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.259988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.260003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.260501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.260551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:50 crc kubenswrapper[4801]: I1206 03:05:50.260572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.265892 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344"} Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.265951 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f"} Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.265961 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e"} Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.265970 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd"} Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.265997 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.266081 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.266996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.267038 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.267048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.599727 4801 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.600024 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.601633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.601689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.601711 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.605521 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:51 crc kubenswrapper[4801]: I1206 03:05:51.966815 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.274671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2"} Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.274865 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.274866 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.274866 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 
03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.276016 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.276048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.276062 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.277372 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:52 crc kubenswrapper[4801]: I1206 03:05:52.326923 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.277458 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.277527 4801 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.277532 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279512 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.279531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.281290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.281344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.281360 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.394816 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 06 03:05:53 crc kubenswrapper[4801]: I1206 03:05:53.871596 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.280306 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.280321 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.281910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.673618 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.673896 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.675381 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.675436 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:54 crc kubenswrapper[4801]: I1206 03:05:54.675447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:56 crc kubenswrapper[4801]: I1206 03:05:56.795369 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:56 crc kubenswrapper[4801]: I1206 03:05:56.795620 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:56 crc kubenswrapper[4801]: I1206 03:05:56.797282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:56 crc kubenswrapper[4801]: I1206 03:05:56.797334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:56 crc kubenswrapper[4801]: I1206 03:05:56.797348 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:57 crc kubenswrapper[4801]: E1206 03:05:57.269360 4801 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.406847 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.407067 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.408594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:58 crc 
kubenswrapper[4801]: I1206 03:05:58.408644 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.408655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.502158 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.502415 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.504254 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.504321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.504347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:58 crc kubenswrapper[4801]: I1206 03:05:58.508559 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:05:58 crc kubenswrapper[4801]: E1206 03:05:58.777794 4801 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.126443 4801 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.293488 4801 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.294751 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.294825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.294845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.495228 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.495327 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.500110 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not 
found]","reason":"Forbidden","details":{},"code":403} Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.500241 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.796490 4801 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 03:05:59 crc kubenswrapper[4801]: I1206 03:05:59.796639 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 03:06:00 crc kubenswrapper[4801]: I1206 03:06:00.377967 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:06:00 crc kubenswrapper[4801]: I1206 03:06:00.379685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:00 crc kubenswrapper[4801]: I1206 03:06:00.379745 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:00 crc kubenswrapper[4801]: I1206 03:06:00.379811 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:00 crc kubenswrapper[4801]: I1206 03:06:00.379848 4801 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.879414 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.879689 4801 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.880471 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.880605 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.881473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.881527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.881547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:03 crc kubenswrapper[4801]: I1206 03:06:03.885871 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.312078 4801 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.312874 4801 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.313101 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.313354 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.313437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.313464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:04 crc kubenswrapper[4801]: E1206 03:06:04.482477 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.488135 4801 trace.go:236] Trace[1449567556]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 03:05:51.382) (total time: 13105ms): Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[1449567556]: ---"Objects listed" error: 13105ms (03:06:04.487) Dec 06 03:06:04 crc kubenswrapper[4801]: 
Trace[1449567556]: [13.105321486s] [13.105321486s] END Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.488217 4801 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.488905 4801 trace.go:236] Trace[671234599]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 03:05:50.040) (total time: 14448ms): Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[671234599]: ---"Objects listed" error: 14448ms (03:06:04.488) Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[671234599]: [14.448251899s] [14.448251899s] END Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.488936 4801 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.491743 4801 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.492863 4801 trace.go:236] Trace[1978013305]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 03:05:50.024) (total time: 14466ms): Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[1978013305]: ---"Objects listed" error: 14466ms (03:06:04.491) Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[1978013305]: [14.466802857s] [14.466802857s] END Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.492938 4801 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 03:06:04 crc kubenswrapper[4801]: I1206 03:06:04.492916 4801 trace.go:236] Trace[755374236]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 03:05:50.003) (total time: 14488ms): Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[755374236]: ---"Objects listed" error: 14488ms (03:06:04.492) Dec 06 03:06:04 crc kubenswrapper[4801]: Trace[755374236]: [14.4888658s] [14.4888658s] END Dec 06 03:06:04 crc 
kubenswrapper[4801]: I1206 03:06:04.493089 4801 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.125370 4801 apiserver.go:52] "Watching apiserver" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.128302 4801 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.128675 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.129289 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.129350 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.129292 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.129627 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.129684 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.129788 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.129835 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.130048 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.130120 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.132482 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.132522 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.133001 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.133129 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.133889 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.133990 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.134020 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.133995 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.134925 4801 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.137389 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 03:06:05 crc kubenswrapper[4801]: 
I1206 03:06:05.172604 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.186150 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196253 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196298 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196321 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196342 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196366 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196394 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196417 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196435 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196450 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196464 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196481 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196498 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196513 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.196532 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196550 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196565 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196588 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196607 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196626 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196703 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196724 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196791 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196815 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 
03:06:05.196834 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196864 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196883 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196901 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196918 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196940 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196956 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196976 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.196999 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197016 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197033 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197051 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197071 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197091 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197106 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197123 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197144 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197164 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197183 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197206 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197225 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197249 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197271 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" 
(UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197294 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197329 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197352 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197374 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197396 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197418 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197449 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197472 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197492 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197513 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197533 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197553 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197584 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197610 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197637 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197660 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197686 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197714 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197738 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197781 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197838 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197861 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197886 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197910 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197935 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197960 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.197985 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198009 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198033 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198060 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198081 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198103 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198150 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198173 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198195 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198216 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198236 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198261 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198311 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198332 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198355 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198380 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198405 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198403 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198431 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198541 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198592 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198623 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198647 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198672 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198729 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198812 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198820 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198835 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198857 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198881 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198908 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198928 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198949 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.198971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199107 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199156 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199200 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199206 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199247 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199290 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199330 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199366 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199404 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199445 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199503 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199569 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199621 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199658 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199731 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199793 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199830 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199865 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199937 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.199976 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200012 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200054 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200093 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200133 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200171 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200169 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200216 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200259 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200294 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200467 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200472 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200518 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200569 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200551 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.200620 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:06:05.700578484 +0000 UTC m=+18.823186096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200677 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200728 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200738 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200744 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200847 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.200970 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201078 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201122 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201163 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201210 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201250 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201369 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201409 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201447 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201483 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201521 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202042 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202080 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202112 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202151 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202235 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202274 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202308 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202336 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202372 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202420 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202457 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202489 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202659 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202732 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202817 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202984 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec
06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203033 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203106 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203172 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203209 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203278 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203358 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203541 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203652 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203695 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203732 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203793 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.203836 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203867 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203938 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204008 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204046 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204085 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204214 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204264 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204310 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204359 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204415 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204457 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" 
(UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204646 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204692 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 
crc kubenswrapper[4801]: I1206 03:06:05.204732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204816 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204854 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204913 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204989 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205016 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205038 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205058 4801 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205077 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205094 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205113 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205132 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205152 4801 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205789 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201093 4801 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201237 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201247 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201519 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.201728 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202063 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202385 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202577 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202696 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.202906 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203121 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203400 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203660 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210995 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204043 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204049 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204278 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204315 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204390 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204385 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204417 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204444 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204699 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204918 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.204933 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205114 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205175 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205218 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205279 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205305 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205325 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205435 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205493 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205541 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205794 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.205468 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.206205 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.206872 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207055 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207143 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207644 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.211428 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207682 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.207994 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208009 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208261 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208290 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.211490 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208443 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208462 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208638 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208702 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.208951 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.209125 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.209699 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.209819 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.209854 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.209904 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210040 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210057 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210323 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210563 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210590 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.210633 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.211595 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.211867 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212179 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212293 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212320 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212326 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212572 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212822 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.212863 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.213706 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214113 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.203986 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214140 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214147 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214578 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214672 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214721 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.214933 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215073 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.215117 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.215200 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:05.715180046 +0000 UTC m=+18.837787638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215217 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215236 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.215533 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215566 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.215597 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.215680 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 03:06:05.715635008 +0000 UTC m=+18.838242820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216020 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216103 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216303 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216386 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216424 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216571 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.216380 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217134 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217425 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217472 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217595 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217697 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.217981 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.219110 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.220406 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.220465 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.221017 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.221914 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.221944 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.222377 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.222460 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.222526 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.222640 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.223383 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.223539 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.223489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.224252 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.224557 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.225221 4801 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.229454 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.229622 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.229922 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.229942 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.229970 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.230730 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.235396 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.235488 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.235570 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.235598 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.235610 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.235910 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:05.735887323 +0000 UTC m=+18.858495095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.236190 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.235912 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.236580 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.236900 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.236957 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.236972 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.238090 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.238140 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.238163 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.238243 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:05.738213566 +0000 UTC m=+18.860821318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.238648 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.240694 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.240831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.241174 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.241363 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.243154 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.249679 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.249847 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.249999 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.250925 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.253848 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.255200 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.255549 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.255603 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.256902 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.256945 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.257048 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.257164 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.257231 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258006 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.257860 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258054 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258092 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258401 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258657 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258438 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258429 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258736 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258777 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.259301 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.258956 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.259377 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.259461 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.260227 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.260274 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.260608 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.260735 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.261781 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.261919 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.261966 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.262524 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.263335 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.264420 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.264967 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265004 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265078 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265308 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265345 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265385 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265699 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.265933 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.266342 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.266371 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.267082 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.267165 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.269353 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.270169 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.270280 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.270479 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.272685 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.274109 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.274183 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.276264 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.279705 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.280291 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.281173 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.283298 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.284302 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.285736 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.287716 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.289032 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.290512 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.290881 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.291944 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.293065 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.295371 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.296424 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.296448 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.298089 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.298746 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.299728 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.300073 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.301385 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.302088 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.303477 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.304269 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.305560 4801 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.305834 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.306854 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 
03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.306934 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.306982 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307144 4801 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307194 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307161 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307227 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.307252 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307274 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307294 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307315 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307334 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307371 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307394 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307409 4801 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307424 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307442 4801 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307456 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307470 4801 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307484 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307498 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307513 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307526 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307539 4801 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307552 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307566 4801 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307579 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307592 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307606 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.307619 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307631 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307644 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307659 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307672 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307686 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307699 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307712 4801 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307726 4801 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307739 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307771 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307786 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307800 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307814 4801 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307827 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307841 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307856 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307871 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307885 4801 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307898 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307911 4801 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307924 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307939 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307953 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307968 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307983 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.307996 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308010 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308025 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308039 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308052 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308064 4801 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308078 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308091 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308104 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308116 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 
03:06:05.308129 4801 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308142 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308156 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308169 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308317 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308333 4801 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308346 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308361 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308375 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308388 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308408 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308425 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308440 4801 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308454 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308468 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308483 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308497 4801 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308503 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308511 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308551 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308564 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308576 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath 
\"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308588 4801 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308601 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308614 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308625 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308638 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308651 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308664 4801 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308677 4801 reconciler_common.go:293] "Volume detached for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308690 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308702 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308716 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308774 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308792 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308805 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308818 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308830 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308842 4801 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308855 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308869 4801 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308881 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308894 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308906 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" 
DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308918 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308930 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308944 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308956 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.308968 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309044 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309062 4801 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.309075 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309089 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309104 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309126 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309139 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309153 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309167 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309181 4801 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309196 4801 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309210 4801 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309223 4801 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309236 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309250 4801 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309263 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309277 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309290 4801 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309303 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309316 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309330 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309344 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309356 4801 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309367 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309379 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309391 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309404 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309418 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309433 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309444 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309457 4801 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc 
kubenswrapper[4801]: I1206 03:06:05.309469 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309482 4801 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309495 4801 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309507 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309521 4801 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309534 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309547 4801 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309561 4801 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309574 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309587 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309601 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309612 4801 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309624 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309638 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309651 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309663 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309676 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309687 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309700 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309713 4801 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309717 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309725 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath 
\"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309738 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309751 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309782 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309794 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309807 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309820 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309832 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 
03:06:05.309844 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309858 4801 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309870 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309882 4801 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309894 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309906 4801 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309918 4801 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309931 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309942 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309956 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309969 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309982 4801 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.309994 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.310005 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.310017 4801 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.310028 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.310039 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.310161 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.311711 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.312382 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.313972 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.314987 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.316116 4801 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.316671 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.318407 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3" exitCode=255 Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.318892 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.319869 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.321172 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.322273 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.322800 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.323766 4801 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.324298 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.325450 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.325959 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.326914 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.327405 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.327929 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.328903 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.329410 4801 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.329928 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.340146 4801 scope.go:117] "RemoveContainer" containerID="b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.340848 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.343165 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.376428 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.409133 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.417386 4801 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.417781 4801 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.419954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.420004 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.420019 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.420044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.420061 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.427101 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.438175 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.438438 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.445414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.445438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.445447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.445467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 
03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.445478 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.454306 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.459549 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.465312 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.467281 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.471572 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.472105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.472135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.472144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.472160 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.472175 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.489571 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.497875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.497922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.497936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.497959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.497976 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.508716 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.513276 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.513347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.513374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.513394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.513406 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.528608 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.528772 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.532204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.532271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.532282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.532304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.532315 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.636349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.636878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.636898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.636925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.636940 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.714339 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.714618 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 03:06:06.714571391 +0000 UTC m=+19.837178963 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.739740 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.739790 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.739800 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.739820 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.739836 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.815783 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.816045 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.816079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.816109 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816232 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816298 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:06.816281687 +0000 UTC m=+19.938889269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816833 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816866 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816886 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816920 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:06.816910124 +0000 UTC m=+19.939517706 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.816981 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.817010 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:06.817002576 +0000 UTC m=+19.939610158 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.817063 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.817076 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.817086 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: E1206 03:06:05.817114 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:06.817105119 +0000 UTC m=+19.939712701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.844379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.844413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.844425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.844442 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.844454 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.946889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.946919 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.946928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.946943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:05 crc kubenswrapper[4801]: I1206 03:06:05.946953 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:05Z","lastTransitionTime":"2025-12-06T03:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.050333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.050373 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.050385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.050402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.050414 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.157686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.157806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.157832 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.157861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.157888 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.211546 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.211831 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.261231 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.261276 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.261286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.261303 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.261316 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.323693 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.323777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"516490d339dbe4951682139cd44f45fd067809018760da8362ab41d7112b9b54"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.326366 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.328400 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.329058 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.331167 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.331196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.331208 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7e717fff890221be5e036a4ba2bd8ed37cee6edc8db94112924e9e03c5142f4d"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.332582 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"52d2234f28b8c3bd83c9c7359dc1051c4af9c0eb1889857b6b397ea051ae6349"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.365037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.365085 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.365098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.365120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.365133 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.380939 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded 
a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.430066 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.445195 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.460518 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.468115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.468152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.468162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.468179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.468190 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.479167 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.493719 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.512343 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.526508 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.537367 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.553237 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.570865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 
03:06:06.570897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.570907 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.570925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.570938 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.573655 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.586568 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.603297 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.618867 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.674402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.674460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.674477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.674499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.674511 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.723804 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.724032 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:06:08.723997217 +0000 UTC m=+21.846604789 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.777093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.777144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.777157 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.777180 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.777196 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.802718 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.809685 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.818808 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.824989 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.825038 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.825064 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:06 crc kubenswrapper[4801]: 
I1206 03:06:06.825090 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825167 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825186 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825202 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825214 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825222 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825238 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 03:06:08.825219859 +0000 UTC m=+21.947827431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825370 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:08.825332422 +0000 UTC m=+21.947939994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825393 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:08.825380344 +0000 UTC m=+21.947987916 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825432 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825447 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825461 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:06 crc kubenswrapper[4801]: E1206 03:06:06.825524 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:08.825516507 +0000 UTC m=+21.948124079 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.837379 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.857403 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.877903 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.880258 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.880302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.880314 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.880336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.880349 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.890314 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.905405 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.919933 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 
03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.937900 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.952598 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.973465 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.983116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.983166 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.983181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.983202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.983215 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:06Z","lastTransitionTime":"2025-12-06T03:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:06 crc kubenswrapper[4801]: I1206 03:06:06.989322 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.012783 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.027342 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.045022 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.064405 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.086164 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.086227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc 
kubenswrapper[4801]: I1206 03:06:07.086244 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.086269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.086286 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.088452 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.189395 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.189459 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.189474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc 
kubenswrapper[4801]: I1206 03:06:07.189498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.189516 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.211519 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.211626 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:07 crc kubenswrapper[4801]: E1206 03:06:07.211697 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:07 crc kubenswrapper[4801]: E1206 03:06:07.211827 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.223497 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.224472 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.225856 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.226605 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.231838 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.245543 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.259476 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.285847 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.291809 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.291871 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.291900 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.291924 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.291939 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.307035 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.333257 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.348350 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 
03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.368618 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.395154 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.395210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.395229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.395258 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.395279 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.498823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.498879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.498950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.498975 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.498991 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.604158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.604251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.604273 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.604341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.604361 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.707157 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.707201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.707214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.707235 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.707250 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.811072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.811135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.811156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.811186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.811207 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.914117 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.914169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.914182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.914203 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:07 crc kubenswrapper[4801]: I1206 03:06:07.914217 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:07Z","lastTransitionTime":"2025-12-06T03:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.017783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.017848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.017865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.017892 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.017910 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.120611 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.120673 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.120692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.120716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.120735 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.212280 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.212504 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.223778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.223845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.223863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.223887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.223907 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.327066 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.327116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.327129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.327150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.327162 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.430389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.430438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.430452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.430479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.430490 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.533285 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.533360 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.533379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.533408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.533428 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.629428 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.636949 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.637001 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.637015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.637041 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.637053 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.650596 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.652812 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.656874 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.669127 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.683702 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.698033 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.713731 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.734734 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.740088 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 
03:06:08.740151 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.740170 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.740194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.740210 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.740571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.740685 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:06:12.740660439 +0000 UTC m=+25.863268011 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.754054 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.768527 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.784151 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.802181 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.819721 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.841733 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.841877 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:08 crc kubenswrapper[4801]: 
I1206 03:06:08.841925 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.841966 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842038 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842103 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842122 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842200 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-06 03:06:12.84217821 +0000 UTC m=+25.964785782 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842234 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842084 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842252 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842277 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842591 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842455 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:12.842404606 +0000 UTC m=+25.965012218 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842671 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:12.842648202 +0000 UTC m=+25.965255974 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:08 crc kubenswrapper[4801]: E1206 03:06:08.842705 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:12.842691883 +0000 UTC m=+25.965299705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.844433 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.844484 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.844503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.844527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.844544 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.845274 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.861595 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.878325 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.892652 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.909618 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.927475 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:08Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.947500 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 
03:06:08.947548 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.947560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.947581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:08 crc kubenswrapper[4801]: I1206 03:06:08.947594 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:08Z","lastTransitionTime":"2025-12-06T03:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.050459 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.050498 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.050506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.050522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.050534 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.154200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.154250 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.154262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.154287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.154300 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.212266 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.212309 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:09 crc kubenswrapper[4801]: E1206 03:06:09.212434 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:09 crc kubenswrapper[4801]: E1206 03:06:09.212607 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.257954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.258007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.258019 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.258039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.258053 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.342956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.364221 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.364282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.364298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.364324 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.364336 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.367676 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.397889 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.434388 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.467542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.467588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.467598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.467617 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.467628 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.497731 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.524925 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.546997 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.562166 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.570045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.570113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.570132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.570172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.570191 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.576070 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.603688 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:09Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.672588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.672631 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.672641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.672659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.672669 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.775329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.775367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.775378 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.775395 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.775406 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.878288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.878356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.878374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.878400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.878419 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.980980 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.981030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.981039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.981057 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:09 crc kubenswrapper[4801]: I1206 03:06:09.981068 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:09Z","lastTransitionTime":"2025-12-06T03:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.083609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.083654 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.083666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.083688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.083698 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.186363 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.186411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.186419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.186435 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.186445 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.211997 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:10 crc kubenswrapper[4801]: E1206 03:06:10.212192 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.290010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.290052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.290063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.290082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.290091 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.393732 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.394021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.394037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.394059 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.394077 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.497275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.497333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.497350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.497375 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.497393 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.600932 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.600993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.601003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.601022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.601032 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.703859 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.704100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.704110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.704136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.704149 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.806398 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.806458 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.806469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.806486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.806500 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.909181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.909226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.909238 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.909256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.909269 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:10Z","lastTransitionTime":"2025-12-06T03:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.926691 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8qjvm"] Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.927554 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.928074 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-s2sg4"] Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.928293 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.928444 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4gxwt"] Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.928901 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4gxwt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.930450 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mjmtt"] Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.931346 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dcvff"] Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.932743 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.933022 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:10 crc kubenswrapper[4801]: W1206 03:06:10.935322 4801 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 06 03:06:10 crc kubenswrapper[4801]: E1206 03:06:10.935391 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935340 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 03:06:10 crc kubenswrapper[4801]: W1206 03:06:10.935332 4801 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 06 03:06:10 crc kubenswrapper[4801]: E1206 03:06:10.935471 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 
03:06:10.935602 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935658 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935681 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935742 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 03:06:10 crc kubenswrapper[4801]: W1206 03:06:10.935801 4801 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 06 03:06:10 crc kubenswrapper[4801]: E1206 03:06:10.935817 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935825 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.935852 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936369 4801 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936392 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936430 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936432 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936534 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.936886 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.937343 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.937525 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.937534 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.939603 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.940987 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 
03:06:10.941102 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.981499 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e
779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:10Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:10 crc kubenswrapper[4801]: I1206 03:06:10.995707 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:10Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.010389 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.012227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.012279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.012289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.012308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.012318 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.025331 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.038453 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.052946 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.062725 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-bin\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc 
kubenswrapper[4801]: I1206 03:06:11.062835 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.062916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-socket-dir-parent\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.062967 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-kubelet\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063001 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063050 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-k8s-cni-cncf-io\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " 
pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063222 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063290 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063313 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qf62\" (UniqueName: \"kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063342 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063389 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063413 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-os-release\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063565 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063642 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063724 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-system-cni-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: 
\"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063829 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063884 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063921 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063957 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.063994 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-hostroot\") pod 
\"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064027 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-proxy-tls\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064059 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2f9\" (UniqueName: \"kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064135 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/b06bf6d5-3516-41cd-b649-1ad8521969c2-hosts-file\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064207 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-daemon-config\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064247 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064279 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064314 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-multus\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064388 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-etc-kubernetes\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064430 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064469 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064493 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-system-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064514 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bnw\" (UniqueName: \"kubernetes.io/projected/9695c5a7-610b-4c76-aa6f-b4f06f20823e-kube-api-access-j5bnw\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064561 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064583 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cni-binary-copy\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064607 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-conf-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064630 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-multus-certs\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064656 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064675 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064692 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064711 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064732 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cnibin\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064798 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-netns\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064847 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-rootfs\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064882 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-cnibin\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-os-release\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064937 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxf65\" (UniqueName: \"kubernetes.io/projected/702cb807-2b51-4192-bf87-5df8398a8cf2-kube-api-access-fxf65\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.064974 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twbsq\" (UniqueName: \"kubernetes.io/projected/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-kube-api-access-twbsq\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.073587 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.092663 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.114020 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.115468 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.115516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.115526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc 
kubenswrapper[4801]: I1206 03:06:11.115544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.115560 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.146079 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 
03:06:11.166562 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cnibin\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166632 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-netns\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-rootfs\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166728 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166799 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cnibin\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166841 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-rootfs\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166856 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-netns\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166860 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.166982 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twbsq\" (UniqueName: \"kubernetes.io/projected/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-kube-api-access-twbsq\") pod \"machine-config-daemon-mjmtt\" (UID: 
\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167137 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-cnibin\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167164 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-os-release\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167191 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxf65\" (UniqueName: \"kubernetes.io/projected/702cb807-2b51-4192-bf87-5df8398a8cf2-kube-api-access-fxf65\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167229 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-bin\") pod 
\"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167282 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-socket-dir-parent\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167385 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-kubelet\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167410 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167489 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-k8s-cni-cncf-io\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167533 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-k8s-cni-cncf-io\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167541 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167567 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 
03:06:11.167577 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-bin\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qf62\" (UniqueName: \"kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167628 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-socket-dir-parent\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" 
(UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-os-release\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167673 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-os-release\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167711 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-kubelet\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-binary-copy\") pod \"multus-additional-cni-plugins-dcvff\" (UID: 
\"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167789 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-os-release\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167833 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167852 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167765 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-cnibin\") pod \"multus-additional-cni-plugins-dcvff\" (UID: 
\"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167847 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167943 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167952 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.167977 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168028 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168032 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168115 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168224 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-system-cni-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168269 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-system-cni-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168271 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-hostroot\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168384 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-proxy-tls\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168410 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168437 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2f9\" (UniqueName: \"kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168510 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168515 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168548 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b06bf6d5-3516-41cd-b649-1ad8521969c2-hosts-file\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-daemon-config\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168632 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-multus\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-etc-kubernetes\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168694 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5bnw\" (UniqueName: 
\"kubernetes.io/projected/9695c5a7-610b-4c76-aa6f-b4f06f20823e-kube-api-access-j5bnw\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168747 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168794 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168817 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-system-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168842 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-multus-certs\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168873 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168904 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168928 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cni-binary-copy\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168949 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-conf-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169037 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-conf-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" 
Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169584 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-run-multus-certs\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169654 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-system-cni-dir\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169654 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169773 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-mcd-auth-proxy-config\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169799 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.169847 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b06bf6d5-3516-41cd-b649-1ad8521969c2-hosts-file\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170015 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/702cb807-2b51-4192-bf87-5df8398a8cf2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.168476 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-hostroot\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-host-var-lib-cni-multus\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170127 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9695c5a7-610b-4c76-aa6f-b4f06f20823e-etc-kubernetes\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170448 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-cni-binary-copy\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170546 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/702cb807-2b51-4192-bf87-5df8398a8cf2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.170855 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9695c5a7-610b-4c76-aa6f-b4f06f20823e-multus-daemon-config\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.171053 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.171272 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.176316 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert\") pod \"ovnkube-node-8qjvm\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.189311 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-proxy-tls\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.203877 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2f9\" (UniqueName: \"kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9\") pod \"ovnkube-node-8qjvm\" (UID: 
\"2cd76211-e203-4b5b-98b0-102d3d67315d\") " pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.204046 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5bnw\" (UniqueName: \"kubernetes.io/projected/9695c5a7-610b-4c76-aa6f-b4f06f20823e-kube-api-access-j5bnw\") pod \"multus-4gxwt\" (UID: \"9695c5a7-610b-4c76-aa6f-b4f06f20823e\") " pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.205326 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twbsq\" (UniqueName: \"kubernetes.io/projected/54a0ee06-a8e7-4d96-844f-d0dd3c90e900-kube-api-access-twbsq\") pod \"machine-config-daemon-mjmtt\" (UID: \"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\") " pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.212256 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.212406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:11 crc kubenswrapper[4801]: E1206 03:06:11.212543 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.212620 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxf65\" (UniqueName: \"kubernetes.io/projected/702cb807-2b51-4192-bf87-5df8398a8cf2-kube-api-access-fxf65\") pod \"multus-additional-cni-plugins-dcvff\" (UID: \"702cb807-2b51-4192-bf87-5df8398a8cf2\") " pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: E1206 03:06:11.212680 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.217769 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.217814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.217830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.217851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.217888 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.219048 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.237494 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.245006 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.258929 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.259744 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4gxwt" Dec 06 03:06:11 crc kubenswrapper[4801]: W1206 03:06:11.259836 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cd76211_e203_4b5b_98b0_102d3d67315d.slice/crio-06016df91adfb0b0b32700c0adce7633f10b47b7fe433ea9723bc95643761839 WatchSource:0}: Error finding container 06016df91adfb0b0b32700c0adce7633f10b47b7fe433ea9723bc95643761839: Status 404 returned error can't find the container with id 06016df91adfb0b0b32700c0adce7633f10b47b7fe433ea9723bc95643761839 Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.269660 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.275634 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dcvff" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.276416 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.292497 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.315028 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.330871 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.330912 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.330922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.330943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.330955 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.341330 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.359511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"06016df91adfb0b0b32700c0adce7633f10b47b7fe433ea9723bc95643761839"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.360536 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerStarted","Data":"d701800828367d6958c796ba4bd076f16bdc005af5c1ddd46e0c1e32fbae5e78"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.361464 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerStarted","Data":"c357ee5c884b45c5f6688b4b74051823b67e31ba57f98a57449e86a55d10a7e5"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.368421 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"5d15c2350a9419cba49a5fb6526ce1c9cb6cf2cd47b6685594fa177d29356f8a"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.380611 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.418470 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.443545 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.443591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.443603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.443623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.443639 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.450187 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.469532 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.483177 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.499564 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:11Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.546272 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.546308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.546317 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.546333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.546344 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.648954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.649007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.649017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.649042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.649056 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.752113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.752152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.752160 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.752181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.752193 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.855724 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.857355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.857369 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.857392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.857406 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.962112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.962499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.962522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.962538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:11 crc kubenswrapper[4801]: I1206 03:06:11.962548 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:11Z","lastTransitionTime":"2025-12-06T03:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.064989 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.065042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.065052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.065071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.065082 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.167633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.167688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.167699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.167719 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.167732 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.194925 4801 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.211635 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.211815 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.270948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.271006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.271025 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.271048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.271061 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373277 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerStarted","Data":"a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373702 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373725 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373798 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.373812 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.376453 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.376511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.378985 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b" exitCode=0 Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.379112 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.381615 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141" exitCode=0 Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.381670 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.382301 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.410170 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\
"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.429958 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.442576 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.457062 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.459931 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.481823 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.487401 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.487466 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.487482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.487506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.487522 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.501008 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.519875 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.526175 4801 projected.go:194] Error preparing data for projected volume kube-api-access-2qf62 for pod openshift-dns/node-resolver-s2sg4: failed to sync configmap cache: timed out waiting for the condition Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.526295 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62 podName:b06bf6d5-3516-41cd-b649-1ad8521969c2 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:13.026265279 +0000 UTC m=+26.148872841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2qf62" (UniqueName: "kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62") pod "node-resolver-s2sg4" (UID: "b06bf6d5-3516-41cd-b649-1ad8521969c2") : failed to sync configmap cache: timed out waiting for the condition Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.527927 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.543775 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.568365 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.585701 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.590887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.590927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.590937 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.590962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.590974 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.609912 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.631741 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.650131 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.665061 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.678233 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.694037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.694092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.694110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.694133 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.694153 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.703501 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.724317 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.739878 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.758398 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.773899 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.786117 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.786266 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:06:20.786247322 +0000 UTC m=+33.908854894 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.789507 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.796641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.796705 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.796718 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.796738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.796776 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.803862 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.823863 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.846126 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.861026 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.875618 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.887657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.887743 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.887825 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.887930 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.887963 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888057 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:20.888032521 +0000 UTC m=+34.010640293 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888057 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888144 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888072 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888260 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888293 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888153 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 03:06:20.888131073 +0000 UTC m=+34.010738835 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888170 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888368 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888401 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:20.888360679 +0000 UTC m=+34.010968271 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:12 crc kubenswrapper[4801]: E1206 03:06:12.888427 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:20.888413151 +0000 UTC m=+34.011020923 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.890431 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.899873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.900313 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.900326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.900346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.900362 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:12Z","lastTransitionTime":"2025-12-06T03:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:12 crc kubenswrapper[4801]: I1206 03:06:12.905384 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T03:06:12Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.002913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.002971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.002981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.002998 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.003010 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.090247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qf62\" (UniqueName: \"kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.098043 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qf62\" (UniqueName: \"kubernetes.io/projected/b06bf6d5-3516-41cd-b649-1ad8521969c2-kube-api-access-2qf62\") pod \"node-resolver-s2sg4\" (UID: \"b06bf6d5-3516-41cd-b649-1ad8521969c2\") " pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.106148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.106191 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.106206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.106227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.106241 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.209875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.209940 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.209959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.209985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.210001 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.212173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.212287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:13 crc kubenswrapper[4801]: E1206 03:06:13.212391 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:13 crc kubenswrapper[4801]: E1206 03:06:13.212488 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.245829 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-x2kfc"] Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.246414 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.251117 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.251198 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.251460 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.251608 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.263502 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.289484 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.305802 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.312941 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.313007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.313018 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 
03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.313037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.313059 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.331553 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.351662 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-s2sg4" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.357829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.374738 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.387670 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.393854 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b966g\" (UniqueName: \"kubernetes.io/projected/524d8648-db2b-432b-959e-068533d1b55d-kube-api-access-b966g\") pod \"node-ca-x2kfc\" (UID: 
\"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.394042 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/524d8648-db2b-432b-959e-068533d1b55d-serviceca\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.394175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/524d8648-db2b-432b-959e-068533d1b55d-host\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396097 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396152 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396167 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" 
event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396198 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.396211 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.399616 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s2sg4" event={"ID":"b06bf6d5-3516-41cd-b649-1ad8521969c2","Type":"ContainerStarted","Data":"0f49b43f6858ac2487fd20d73971b35756b14322672b93554e829e84dddb2b92"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.405558 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa" exitCode=0 Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.405667 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.410181 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.415604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.415690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.415724 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.415749 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.415810 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.426684 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.442184 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.463044 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.479703 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.493570 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.495205 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/524d8648-db2b-432b-959e-068533d1b55d-serviceca\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.495274 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/524d8648-db2b-432b-959e-068533d1b55d-host\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.495410 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b966g\" (UniqueName: \"kubernetes.io/projected/524d8648-db2b-432b-959e-068533d1b55d-kube-api-access-b966g\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " 
pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.497013 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/524d8648-db2b-432b-959e-068533d1b55d-serviceca\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.497135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/524d8648-db2b-432b-959e-068533d1b55d-host\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.507734 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1
cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.513883 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b966g\" (UniqueName: \"kubernetes.io/projected/524d8648-db2b-432b-959e-068533d1b55d-kube-api-access-b966g\") pod \"node-ca-x2kfc\" (UID: \"524d8648-db2b-432b-959e-068533d1b55d\") " pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.521552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.521591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.521601 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc 
kubenswrapper[4801]: I1206 03:06:13.521620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.521632 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.525628 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.542656 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.561700 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.565069 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-x2kfc" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.576848 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\
\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: W1206 03:06:13.587531 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod524d8648_db2b_432b_959e_068533d1b55d.slice/crio-ffd523fe8a14c6b1c4eaf9bf344b7028eb2df3a49e2c2e8c885a7bc742bd8a75 WatchSource:0}: Error finding container ffd523fe8a14c6b1c4eaf9bf344b7028eb2df3a49e2c2e8c885a7bc742bd8a75: Status 404 returned error can't find the container with id ffd523fe8a14c6b1c4eaf9bf344b7028eb2df3a49e2c2e8c885a7bc742bd8a75 Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.596374 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.613711 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.626012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.626052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.626060 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.626079 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.626092 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.635730 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.658958 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.682122 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.697575 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.711377 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.724273 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.729624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.729654 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.729662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 
03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.729677 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.729686 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.746029 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/
var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.758930 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 
03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.771393 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.782026 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:13Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.832316 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.832384 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.832402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.832423 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.832438 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.935303 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.935353 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.935366 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.935385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:13 crc kubenswrapper[4801]: I1206 03:06:13.935402 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:13Z","lastTransitionTime":"2025-12-06T03:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.038368 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.038420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.038432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.038452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.038467 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.140899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.140940 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.140952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.140968 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.140982 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.211937 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:14 crc kubenswrapper[4801]: E1206 03:06:14.212099 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.243386 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.243427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.243435 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.243451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.243462 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.346865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.346916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.346930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.346954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.346968 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.412923 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-x2kfc" event={"ID":"524d8648-db2b-432b-959e-068533d1b55d","Type":"ContainerStarted","Data":"20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.413021 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-x2kfc" event={"ID":"524d8648-db2b-432b-959e-068533d1b55d","Type":"ContainerStarted","Data":"ffd523fe8a14c6b1c4eaf9bf344b7028eb2df3a49e2c2e8c885a7bc742bd8a75"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.417342 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1" exitCode=0 Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.417402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.419223 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-s2sg4" event={"ID":"b06bf6d5-3516-41cd-b649-1ad8521969c2","Type":"ContainerStarted","Data":"0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.434267 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.450283 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.450329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.450339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.450359 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.450370 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.455330 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.480170 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.506367 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.526262 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.545578 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.557358 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.557399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.557409 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.557428 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.557439 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.562920 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.579618 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.595781 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.612285 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.630992 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.648862 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.668742 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.672878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.672943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.672957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.672976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.672989 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.691323 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.710067 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.726578 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.757835 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.776364 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.776413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.776422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.776442 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.776453 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.780505 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 
03:06:14.796963 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.814867 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.830251 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.851240 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.868065 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.879592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc 
kubenswrapper[4801]: I1206 03:06:14.879646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.879661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.879683 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.879697 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.887702 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.915506 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.932138 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 
03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.951723 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.972642 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.983005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.983300 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.983360 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.983397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.983420 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:14Z","lastTransitionTime":"2025-12-06T03:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:14 crc kubenswrapper[4801]: I1206 03:06:14.988879 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:14Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.005376 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.086999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.087122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.087150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc 
kubenswrapper[4801]: I1206 03:06:15.087175 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.087194 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.190427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.190473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.190485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.190507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.190522 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.212502 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.212546 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.212719 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.212945 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.292952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.293102 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.293124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.293196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.293220 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.397371 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.397440 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.397467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.397501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.397528 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.427745 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.430282 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef" exitCode=0 Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.430338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.468351 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.488556 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.502673 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.502785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.502803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.502828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.502849 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.513811 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.528482 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.545612 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.558970 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.575166 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.590515 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.604554 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.605902 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.605928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.605936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.605953 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.605964 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.618122 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.640221 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.671599 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1
cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.692578 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.711004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.711055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.711065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc 
kubenswrapper[4801]: I1206 03:06:15.711085 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.711097 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.711898 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.727459 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.744483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.744521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.744530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.744548 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.744559 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.761120 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.765928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.765978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.765997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.766021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.766037 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.780371 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload identical to the previous attempt ... }\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.785154 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.785191 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.785203 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.785219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.785232 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.800812 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{ ... status patch payload identical to the previous attempt ... }\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.806391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.806446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.806462 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.806483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.806497 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.820809 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.825929 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.825979 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.825993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.826015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.826031 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.843966 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:15Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:15 crc kubenswrapper[4801]: E1206 03:06:15.844099 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.846521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.846551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.846561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.846574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.846582 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.950523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.950614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.950645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.950872 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:15 crc kubenswrapper[4801]: I1206 03:06:15.950912 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:15Z","lastTransitionTime":"2025-12-06T03:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.056397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.056473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.056495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.056526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.056547 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.161779 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.162084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.162146 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.162267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.162334 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.212318 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:16 crc kubenswrapper[4801]: E1206 03:06:16.212507 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.265882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.265921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.265934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.265953 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.265966 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.369302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.369356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.369374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.369398 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.369416 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.440011 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794" exitCode=0 Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.440102 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.458937 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a4
5dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.480334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.480394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.480408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.480431 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.480452 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.482675 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.504340 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.521438 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585187 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585281 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.585589 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.608414 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.625175 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.638390 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.655986 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.671394 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.685929 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.687775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.687822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.687840 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.687864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.687879 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.710451 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.728154 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1
cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.742464 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.755240 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:16Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.791107 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.791354 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.791462 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.791537 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.791603 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.894646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.895090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.895240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.895380 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.895511 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.997722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.998120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.998428 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.998573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:16 crc kubenswrapper[4801]: I1206 03:06:16.998711 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:16Z","lastTransitionTime":"2025-12-06T03:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.101695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.101781 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.101798 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.101829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.101848 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.205470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.205878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.205906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.205943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.205967 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.211808 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.211852 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:17 crc kubenswrapper[4801]: E1206 03:06:17.212053 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:17 crc kubenswrapper[4801]: E1206 03:06:17.212176 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.231533 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.248106 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.260829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.281630 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.299940 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.308510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.308551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.308562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.308581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.308593 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.312143 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\"
:\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.333443 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.351293 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.365011 4801 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.379401 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.392665 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.406026 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.413086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.413130 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.413144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.413162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.413176 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.427739 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.444268 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1
cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.451474 4801 generic.go:334] "Generic (PLEG): container finished" podID="702cb807-2b51-4192-bf87-5df8398a8cf2" containerID="3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3" exitCode=0 Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.451539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerDied","Data":"3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.459165 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.473307 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.489501 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.506105 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.518482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.518535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.518546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.518564 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.518575 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.523258 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.535270 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.559522 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.574665 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.589866 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.604168 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.619803 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.621082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.621132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.621146 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.621169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.621186 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.635629 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.649039 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.668115 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.684530 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.698521 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.723741 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.723794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.723804 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.723820 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.723835 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.826087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.826149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.826163 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.826188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.826205 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.835806 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.850495 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.867283 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.888428 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.906664 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.925741 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.930570 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.930639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.930659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.930687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.930706 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:17Z","lastTransitionTime":"2025-12-06T03:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.951636 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.969793 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:17 crc kubenswrapper[4801]: I1206 03:06:17.993153 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.008403 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.030277 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.033521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc 
kubenswrapper[4801]: I1206 03:06:18.033575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.033585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.033606 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.033620 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.048316 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.069162 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.085537 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.103561 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.128014 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408
ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.135710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.135768 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.135777 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.135794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.135803 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.212045 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:18 crc kubenswrapper[4801]: E1206 03:06:18.212187 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.238709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.238778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.238786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.238810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.238820 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.341412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.341464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.341475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.341494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.341504 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.444830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.444914 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.444934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.444961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.444981 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.459607 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" event={"ID":"702cb807-2b51-4192-bf87-5df8398a8cf2","Type":"ContainerStarted","Data":"5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.467369 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.467740 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.467823 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.467838 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.486148 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.501037 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.504376 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.512909 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c2
88e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.534978 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.548403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.548477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.548501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc 
kubenswrapper[4801]: I1206 03:06:18.548535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.548558 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.553720 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c
3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.573898 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.602599 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.615776 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.638813 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.651698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.651777 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.651794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.651816 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.651831 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.659051 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.678611 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.689990 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.722615 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.739932 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.753138 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.755054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.755099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.755114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.755136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.755151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.763625 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.781911 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.795191 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.807863 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.820439 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.833469 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.847806 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.858067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.858120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.858133 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.858156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.858170 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.862099 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.876419 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.891445 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.907350 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.927377 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.943224 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9
007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97
c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{
\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T
03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.954957 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.960253 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.960295 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.960306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:18 crc 
kubenswrapper[4801]: I1206 03:06:18.960323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.960334 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:18Z","lastTransitionTime":"2025-12-06T03:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.969227 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:18 crc kubenswrapper[4801]: I1206 03:06:18.984860 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:18Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.063495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.063544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.063554 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.063571 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.063582 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.166299 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.166362 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.166382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.166410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.166431 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.211658 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.211696 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:19 crc kubenswrapper[4801]: E1206 03:06:19.211915 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:19 crc kubenswrapper[4801]: E1206 03:06:19.212081 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.269190 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.269226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.269234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.269251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.269259 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.371903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.372242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.372255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.372277 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.372292 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.475443 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.475502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.475515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.475535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.475548 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.578300 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.578342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.578353 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.578370 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.578382 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.681906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.681952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.681964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.681981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.681992 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.787152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.787220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.787232 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.787255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.787267 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.890407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.890468 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.890482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.890500 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.890514 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.994635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.994690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.994703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.994722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:19 crc kubenswrapper[4801]: I1206 03:06:19.994736 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:19Z","lastTransitionTime":"2025-12-06T03:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.097838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.097894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.097905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.097922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.097934 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.201218 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.201288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.201302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.201325 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.201345 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.211843 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.212040 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.305103 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.305144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.305156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.305178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.305192 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.408411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.408464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.408473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.408493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.408503 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.478503 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/0.log" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.482232 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8" exitCode=1 Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.482296 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.483654 4801 scope.go:117] "RemoveContainer" containerID="d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.499954 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.511005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.511043 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.511054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc 
kubenswrapper[4801]: I1206 03:06:20.511071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.511084 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.553341 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.566337 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.596545 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.613599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.613638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.613648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.613662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.613672 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.618644 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.635474 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.651929 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.673435 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.691051 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.704835 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.716538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.716564 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.716572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc 
kubenswrapper[4801]: I1206 03:06:20.716587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.716597 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.719431 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.734307 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.749546 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.769806 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 
handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.786696 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:20Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.819386 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.819432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.819447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.819467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.819480 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.887225 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.887795 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:06:36.887737407 +0000 UTC m=+50.010345019 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.922346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.922394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.922412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.922439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:20 crc kubenswrapper[4801]: 
I1206 03:06:20.922456 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:20Z","lastTransitionTime":"2025-12-06T03:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.989229 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.989348 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.989404 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:20 crc kubenswrapper[4801]: I1206 03:06:20.989445 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989454 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989573 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:36.989540275 +0000 UTC m=+50.112147927 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989585 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989669 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:36.989642477 +0000 UTC m=+50.112250079 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989708 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989798 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989813 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989712 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989897 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:36.989861584 +0000 UTC m=+50.112469156 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989912 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.989942 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:20 crc kubenswrapper[4801]: E1206 03:06:20.990038 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:36.990011208 +0000 UTC m=+50.112618980 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.025131 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.025187 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.025200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.025217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.025228 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.128252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.128320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.128345 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.128372 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.128392 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.211410 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.211475 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:21 crc kubenswrapper[4801]: E1206 03:06:21.211562 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:21 crc kubenswrapper[4801]: E1206 03:06:21.211697 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.230526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.230587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.230602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.230632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.230647 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.333591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.333645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.333664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.333685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.333699 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.436923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.436996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.437014 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.437041 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.437057 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.488835 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/0.log" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.492187 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.492649 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.510909 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540421 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540495 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540559 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.540966 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 
handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.564041 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.583325 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.605809 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.622286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.643316 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.644346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.644400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.644419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc 
kubenswrapper[4801]: I1206 03:06:21.644446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.644464 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.671495 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.686087 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f41
6f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.712874 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.737268 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.748312 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.748374 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.748389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.748414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.748428 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.755920 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.774692 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.792036 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.812634 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:21Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.852054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.852110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.852123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc 
kubenswrapper[4801]: I1206 03:06:21.852145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.852162 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.955440 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.955494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.955509 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.955531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:21 crc kubenswrapper[4801]: I1206 03:06:21.955589 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:21Z","lastTransitionTime":"2025-12-06T03:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.059450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.059508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.059522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.059543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.059560 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.162968 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.163027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.163040 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.163066 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.163079 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.212177 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:22 crc kubenswrapper[4801]: E1206 03:06:22.212470 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.266734 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.266861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.266878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.266899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.266910 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.370595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.370648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.370658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.370678 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.370691 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.473310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.473415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.473445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.473484 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.473511 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.576485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.576572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.576594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.576625 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.576650 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.680999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.681078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.681097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.681124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.681144 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.786220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.786309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.786326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.786357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.786404 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.889315 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.889383 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.889394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.889413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.889425 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.993659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.993727 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.993741 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.993787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:22 crc kubenswrapper[4801]: I1206 03:06:22.993807 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:22Z","lastTransitionTime":"2025-12-06T03:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.097096 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.097158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.097171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.097194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.097208 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.203065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.203113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.203123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.203140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.203152 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.206339 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx"] Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.206988 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.210682 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.212139 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.212205 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:23 crc kubenswrapper[4801]: E1206 03:06:23.212363 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.212448 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:23 crc kubenswrapper[4801]: E1206 03:06:23.212515 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.229636 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cer
ts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":
\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.246935 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.260538 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.283275 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.302242 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.306511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.306574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.306598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.306635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.306651 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.319724 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.319938 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.320009 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-env-overrides\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.320200 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmzwc\" (UniqueName: \"kubernetes.io/projected/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-kube-api-access-wmzwc\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.320600 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.339589 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.353435 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.387342 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.409742 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.409806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.409824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.409848 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.409868 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.412041 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.421927 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.422023 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.422087 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-env-overrides\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.422129 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmzwc\" (UniqueName: \"kubernetes.io/projected/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-kube-api-access-wmzwc\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.423291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.423458 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-env-overrides\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.430498 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.433607 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.446939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmzwc\" (UniqueName: \"kubernetes.io/projected/e05c2b2b-91fd-47d5-8af2-7e79eabe1585-kube-api-access-wmzwc\") pod \"ovnkube-control-plane-749d76644c-md5jx\" (UID: \"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.451268 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.465287 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.483496 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.500192 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.500456 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/1.log" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.501284 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/0.log" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.504732 4801 
generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252" exitCode=1 Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.504802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.504875 4801 scope.go:117] "RemoveContainer" containerID="d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.506175 4801 scope.go:117] "RemoveContainer" containerID="a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252" Dec 06 03:06:23 crc kubenswrapper[4801]: E1206 03:06:23.506576 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.513373 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.513539 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.513637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.513745 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 
03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.513873 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.518371 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.526963 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.548618 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: []services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.570929 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c
41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
25-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:1
5Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.595657 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:0
5:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855a
a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.615176 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.617474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.617716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.617735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.617793 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.617814 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.632825 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.648817 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.664851 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.679961 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.708644 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.721313 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.721375 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.721388 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.721411 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.721429 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.731796 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.752955 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.772294 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.789373 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.812611 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.825593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.825652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.825673 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.825703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.825727 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.831388 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.848646 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:23Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.929078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.929128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.929144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.929170 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:23 crc kubenswrapper[4801]: I1206 03:06:23.929187 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:23Z","lastTransitionTime":"2025-12-06T03:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.032339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.032420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.032450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.032487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.032512 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.136241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.136321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.136341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.136374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.136393 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.211413 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:24 crc kubenswrapper[4801]: E1206 03:06:24.211645 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.239722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.239795 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.239815 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.239865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.239882 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.342872 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.342926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.342935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.342954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.342969 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.445267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.445307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.445319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.445338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.445349 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.508977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" event={"ID":"e05c2b2b-91fd-47d5-8af2-7e79eabe1585","Type":"ContainerStarted","Data":"68237a2e33d2d72484fa0348efa0109ce3a6024eed6c8c1c162a78ebe9b5739d"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.548022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.548137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.548161 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.548185 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.548205 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.650911 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.650966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.650979 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.651003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.651019 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.754159 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.754239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.754265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.754297 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.754322 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.857468 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.857854 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.857978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.858083 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.858189 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.961851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.962282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.962356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.962439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:24 crc kubenswrapper[4801]: I1206 03:06:24.962501 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:24Z","lastTransitionTime":"2025-12-06T03:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.042273 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-wpnbx"] Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.043084 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.043196 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.061661 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.065914 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.066047 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.066510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.066654 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.066724 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.076021 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.089620 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.110339 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.123981 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.134623 4801 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.146191 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.146434 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vssxc\" (UniqueName: \"kubernetes.io/projected/134354b0-1613-4536-aaf8-4e5ad12705f9-kube-api-access-vssxc\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.146503 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.157318 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc 
kubenswrapper[4801]: I1206 03:06:25.168835 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.168872 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.168882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.168900 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.168910 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.171426 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.184083 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.195876 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.212258 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.212259 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.212413 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.212443 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.224177 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: []services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.249574 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.249662 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vssxc\" (UniqueName: \"kubernetes.io/projected/134354b0-1613-4536-aaf8-4e5ad12705f9-kube-api-access-vssxc\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.250151 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.250203 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:25.750184184 +0000 UTC m=+38.872791766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.266487 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714
c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net
.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.271706 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.271770 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.271787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.271805 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.271817 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.285899 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vssxc\" (UniqueName: \"kubernetes.io/projected/134354b0-1613-4536-aaf8-4e5ad12705f9-kube-api-access-vssxc\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.294996 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.319842 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.349491 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.364339 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.374560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.374593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.374602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.374620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.374630 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.477475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.477874 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.477972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.478126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.478246 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.514311 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/1.log" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.520120 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" event={"ID":"e05c2b2b-91fd-47d5-8af2-7e79eabe1585","Type":"ContainerStarted","Data":"350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.581171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.581221 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.581257 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.581274 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.581286 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.684613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.684669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.684682 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.684706 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.684719 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.754356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.754571 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.754652 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:26.754632115 +0000 UTC m=+39.877239687 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.787544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.787582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.787592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.787609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.787620 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.890402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.890785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.890801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.890826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.890842 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.903298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.903333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.903342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.903357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.903367 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.915820 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.920026 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.920065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.920081 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.920101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.920118 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.933011 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.937878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.937934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.937952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.937974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.937992 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.954148 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.958079 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.958121 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.958138 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.958156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.958168 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.977186 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.982177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.982240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.982263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.982287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:25 crc kubenswrapper[4801]: I1206 03:06:25.982308 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:25Z","lastTransitionTime":"2025-12-06T03:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.998597 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:25Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:25 crc kubenswrapper[4801]: E1206 03:06:25.998728 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.000324 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.000363 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.000372 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.000390 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.000402 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.103745 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.103816 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.103827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.103844 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.103855 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.206747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.206803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.206813 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.206831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.206843 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.212016 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:26 crc kubenswrapper[4801]: E1206 03:06:26.212158 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.309045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.309097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.309106 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.309127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.309139 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.412527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.412623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.412646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.412677 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.412707 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.516736 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.516870 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.516889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.516922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.516941 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.525949 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" event={"ID":"e05c2b2b-91fd-47d5-8af2-7e79eabe1585","Type":"ContainerStarted","Data":"7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.548196 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.569577 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.610140 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.619827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.619864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.619877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc 
kubenswrapper[4801]: I1206 03:06:26.619923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.619935 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.629584 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06
T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.649741 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc 
kubenswrapper[4801]: I1206 03:06:26.669651 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.693714 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.716088 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.721993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.722053 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.722067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.722087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.722099 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.751644 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: []services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.765657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:26 crc kubenswrapper[4801]: E1206 03:06:26.765853 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:26 crc kubenswrapper[4801]: E1206 03:06:26.765938 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:28.76591418 +0000 UTC m=+41.888521752 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.776970 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.798288 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.821284 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.826515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc 
kubenswrapper[4801]: I1206 03:06:26.826561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.826572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.826592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.826606 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.841211 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.876394 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03
:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.898379 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.918356 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.930531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.930590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.930609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.930638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.930658 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:26Z","lastTransitionTime":"2025-12-06T03:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:26 crc kubenswrapper[4801]: I1206 03:06:26.938837 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:26Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.034842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.034912 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.034930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.034962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.034981 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.138957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.139036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.139063 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.139093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.139120 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.212263 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.212287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.212489 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:27 crc kubenswrapper[4801]: E1206 03:06:27.212470 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:27 crc kubenswrapper[4801]: E1206 03:06:27.212786 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:27 crc kubenswrapper[4801]: E1206 03:06:27.212952 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242684 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242710 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242728 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.242733 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.260735 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.286690 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.305849 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.339571 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6492bd0f73c7e2ebcce4797b7fc86383154961eba2158c974a776ecc8608fc8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:20Z\\\",\\\"message\\\":\\\"4 for removal\\\\nI1206 03:06:19.886654 6144 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1206 03:06:19.886690 6144 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1206 03:06:19.886709 6144 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1206 03:06:19.886729 6144 handler.go:208] Removed 
*v1.EgressFirewall event handler 9\\\\nI1206 03:06:19.886746 6144 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1206 03:06:19.886769 6144 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1206 03:06:19.886781 6144 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1206 03:06:19.886790 6144 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 03:06:19.886818 6144 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 03:06:19.886849 6144 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1206 03:06:19.886868 6144 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1206 03:06:19.886911 6144 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 03:06:19.886926 6144 handler.go:208] Removed *v1.Node event handler 2\\\\nI1206 03:06:19.886932 6144 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 03:06:19.886999 6144 handler.go:208] Removed *v1.Node event handler 7\\\\nI1206 03:06:19.886999 6144 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: []services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod 
openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",
\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.345208 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 
03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.345246 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.345260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.345283 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.345298 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.358294 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.377017 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.391123 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.418142 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.443300 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.448701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.448841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.448873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.448910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.448936 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.464638 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.479253 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.495699 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.512873 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.529060 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.544704 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.552089 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.552139 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.552154 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.552178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.552199 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.557795 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:27Z is after 2025-08-24T17:21:41Z" Dec 
06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.655114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.655158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.655171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.655189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.655201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.758350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.758431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.758451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.758486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.758502 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.861959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.862023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.862035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.862056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.862068 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.964738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.964797 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.964806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.964821 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:27 crc kubenswrapper[4801]: I1206 03:06:27.964830 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:27Z","lastTransitionTime":"2025-12-06T03:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.068425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.068497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.068514 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.068540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.068558 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.172639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.172697 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.172709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.172726 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.172737 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.212057 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:28 crc kubenswrapper[4801]: E1206 03:06:28.212209 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.275328 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.275423 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.275444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.275472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.275494 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.379988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.380061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.380085 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.380132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.380157 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.483878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.483960 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.483974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.483997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.484011 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.587985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.588066 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.588085 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.588112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.588133 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.692247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.692320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.692340 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.692369 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.692388 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.788603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:28 crc kubenswrapper[4801]: E1206 03:06:28.788843 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:28 crc kubenswrapper[4801]: E1206 03:06:28.788949 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:32.788920503 +0000 UTC m=+45.911528115 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.796509 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.796567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.796590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.796624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.796647 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.901860 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.901926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.901951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.901982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:28 crc kubenswrapper[4801]: I1206 03:06:28.902007 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:28Z","lastTransitionTime":"2025-12-06T03:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.007355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.007418 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.007439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.007467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.007565 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.110824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.110867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.110879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.110897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.110909 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.211730 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.211881 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.211912 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:29 crc kubenswrapper[4801]: E1206 03:06:29.212065 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:29 crc kubenswrapper[4801]: E1206 03:06:29.212210 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:29 crc kubenswrapper[4801]: E1206 03:06:29.212388 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.215035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.215094 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.215114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.215137 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.215157 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.318012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.318083 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.318100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.318126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.318151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.421866 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.421937 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.421955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.421982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.422006 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.525910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.526002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.526022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.526052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.526072 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.629999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.630078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.630098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.630126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.630147 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.733474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.733532 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.733543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.733565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.733579 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.837056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.837138 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.837156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.837182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.837201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.940594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.940665 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.940682 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.940711 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:29 crc kubenswrapper[4801]: I1206 03:06:29.940726 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:29Z","lastTransitionTime":"2025-12-06T03:06:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.043700 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.043775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.043785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.043801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.043814 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.146857 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.146957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.146972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.146997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.147012 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.212076 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:30 crc kubenswrapper[4801]: E1206 03:06:30.212293 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.249352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.249433 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.249456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.249481 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.249500 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.352126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.352218 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.352235 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.352263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.352282 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.455234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.455299 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.455323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.455355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.455374 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.558936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.559013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.559031 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.559059 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.559078 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.662377 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.662470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.662492 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.662526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.662550 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.766179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.766329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.766348 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.766376 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.766804 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.870885 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.870929 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.870938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.870957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.870967 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.974830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.974894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.974908 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.974934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:30 crc kubenswrapper[4801]: I1206 03:06:30.974950 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:30Z","lastTransitionTime":"2025-12-06T03:06:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.078827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.078889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.078924 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.079000 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.079028 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.182492 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.182565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.182586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.182613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.182632 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.212219 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.212265 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.212219 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:31 crc kubenswrapper[4801]: E1206 03:06:31.212449 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:31 crc kubenswrapper[4801]: E1206 03:06:31.212679 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:31 crc kubenswrapper[4801]: E1206 03:06:31.212920 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.285829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.285907 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.285928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.285959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.285981 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.389255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.389301 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.389311 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.389329 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.389342 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.492906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.492964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.492978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.493004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.493018 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.597674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.597746 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.597783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.597806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.597820 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.701666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.701738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.701785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.701815 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.701833 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.805010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.805055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.805071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.805091 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.805105 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.909039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.909100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.909117 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.909141 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:31 crc kubenswrapper[4801]: I1206 03:06:31.909155 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:31Z","lastTransitionTime":"2025-12-06T03:06:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.012704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.012994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.013022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.013052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.013077 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.115672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.115730 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.115783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.115814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.115833 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.211324 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:32 crc kubenswrapper[4801]: E1206 03:06:32.211513 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.218290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.218325 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.218336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.218352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.218364 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.321472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.321522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.321534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.321553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.321569 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.425256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.425319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.425331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.425352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.425366 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.528841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.528910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.528927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.528955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.528973 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.632434 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.632481 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.632491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.632509 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.632520 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.735787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.735846 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.735857 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.735881 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.735892 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.839174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.839229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.839243 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.839263 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.839278 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.842868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:32 crc kubenswrapper[4801]: E1206 03:06:32.843098 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:32 crc kubenswrapper[4801]: E1206 03:06:32.843182 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:40.843159837 +0000 UTC m=+53.965767409 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.942837 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.942877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.942887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.942901 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:32 crc kubenswrapper[4801]: I1206 03:06:32.942910 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:32Z","lastTransitionTime":"2025-12-06T03:06:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.046571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.046661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.046736 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.046787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.046828 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.150493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.150563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.150581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.150609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.150630 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.212319 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:33 crc kubenswrapper[4801]: E1206 03:06:33.212501 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.213134 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:33 crc kubenswrapper[4801]: E1206 03:06:33.213270 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.213448 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:33 crc kubenswrapper[4801]: E1206 03:06:33.213566 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.253798 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.253863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.253883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.253910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.253933 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.356847 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.356951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.356974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.357006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.357023 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.460664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.460725 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.460743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.460804 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.460826 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.563135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.563172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.563183 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.563199 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.563209 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.665535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.665584 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.665596 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.665614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.665626 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.768463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.768514 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.768527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.768546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.768558 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.871819 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.871864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.871879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.871897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.871909 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.975242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.975314 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.975333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.975451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:33 crc kubenswrapper[4801]: I1206 03:06:33.975479 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:33Z","lastTransitionTime":"2025-12-06T03:06:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.079412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.079467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.079480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.079499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.079514 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.183372 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.183437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.183454 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.183478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.183495 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.211974 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:34 crc kubenswrapper[4801]: E1206 03:06:34.212123 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.213374 4801 scope.go:117] "RemoveContainer" containerID="a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.247059 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manif
ests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a6
9b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb
68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.272881 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.285976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.286028 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.286040 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.286061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.286075 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.289659 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.305097 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.321679 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.336712 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.358967 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.377531 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.388822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.388883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.388897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc 
kubenswrapper[4801]: I1206 03:06:34.388919 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.388934 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.396845 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06
T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.433235 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9
a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.456445 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.474829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.489859 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.492125 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.492328 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.492450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.492568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.492678 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.508090 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.527029 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.542671 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.558078 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.561040 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/1.log" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.566272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.568126 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.587701 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.595278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.595310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.595327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.595351 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.595367 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.607812 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.633395 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.648163 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.664120 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.680237 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.680607 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.691504 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.698401 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.698546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc 
kubenswrapper[4801]: I1206 03:06:34.698833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.698848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.698869 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.698879 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.713091 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.726338 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-
06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.748192 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.761012 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.795567 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408
ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.801499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.801670 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.801775 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.801853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.801919 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.813411 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.836087 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\
\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.855880 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.868070 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc 
kubenswrapper[4801]: I1206 03:06:34.881079 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.894628 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.905347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.905384 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.905392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:34 crc 
kubenswrapper[4801]: I1206 03:06:34.905408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.905419 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:34Z","lastTransitionTime":"2025-12-06T03:06:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.910083 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.923326 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c
9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.948118 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.966956 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:34 crc kubenswrapper[4801]: I1206 03:06:34.985978 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:34Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.004388 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.008623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.008674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.008688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.008711 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.008724 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.028414 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.042057 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.056150 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.068231 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.080330 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc 
kubenswrapper[4801]: I1206 03:06:35.093145 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.104414 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.113672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.114009 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.114136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.114225 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.114301 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.127973 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.145253 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.197714 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.211964 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:35 crc kubenswrapper[4801]: E1206 03:06:35.212159 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.212436 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:35 crc kubenswrapper[4801]: E1206 03:06:35.212526 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.213029 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:35 crc kubenswrapper[4801]: E1206 03:06:35.213117 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.222405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.222446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.222457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.222503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.222518 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.224829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:35Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.325773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.325825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.325838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.325856 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.325867 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.429172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.429494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.429620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.429704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.429794 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.533107 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.533162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.533178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.533202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.533218 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.636834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.636886 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.636898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.636925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.636938 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.739948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.739997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.740010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.740030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.740043 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.842093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.842135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.842146 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.842164 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.842177 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.946072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.946505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.946636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.946806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:35 crc kubenswrapper[4801]: I1206 03:06:35.946969 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:35Z","lastTransitionTime":"2025-12-06T03:06:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.050014 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.050059 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.050074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.050095 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.050109 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.070250 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.070319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.070344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.070370 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.070391 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.087842 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.092384 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.092420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.092433 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.092452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.092462 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.105366 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.110565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.110593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.110603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.110615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.110624 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.121669 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.126036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.126064 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.126073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.126084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.126092 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.142725 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.147079 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.147105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.147114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.147127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.147136 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.164594 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [...] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.164731 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.166362 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.166392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.166402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.166415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.166428 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.212113 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.212249 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.273056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.273135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.273168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.273241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.273291 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.376494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.377079 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.377101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.377132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.377153 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.480473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.480544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.480566 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.480596 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.480616 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.578114 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/2.log" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.579352 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/1.log" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583165 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583739 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28" exitCode=1 Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.583817 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.584106 4801 scope.go:117] "RemoveContainer" containerID="a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.585508 4801 scope.go:117] "RemoveContainer" containerID="12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28" Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.585863 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.609464 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.629310 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.649333 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.667841 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.683904 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc 
kubenswrapper[4801]: I1206 03:06:36.687216 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.687261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.687273 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.687292 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.687306 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.709430 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.734990 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.757089 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.779443 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.790342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.790438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.790458 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.790482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.790500 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.798087 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.825661 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 
03:06:36.846615 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.866680 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.886497 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.889990 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.890303 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:07:08.890244682 +0000 UTC m=+82.012852334 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.892955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.893018 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.893042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.893073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.893096 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.914530 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.935395 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.951202 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.966204 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:36Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.991025 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.991396 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.991583 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.991248 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.991791 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.991815 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.991540 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.991912 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.992122 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992149 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:07:08.992118113 +0000 UTC m=+82.114725875 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992244 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992412 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992434 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992360 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:07:08.992337858 +0000 UTC m=+82.114945450 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992529 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:07:08.992516073 +0000 UTC m=+82.115123875 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:06:36 crc kubenswrapper[4801]: E1206 03:06:36.992546 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:07:08.992539884 +0000 UTC m=+82.115147716 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.996927 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.996982 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.996997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.997021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:36 crc kubenswrapper[4801]: I1206 03:06:36.997037 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:36Z","lastTransitionTime":"2025-12-06T03:06:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.100897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.100954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.100969 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.100990 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.101003 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.203839 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.204141 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.204203 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.204265 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.204326 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.212401 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.212476 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.212581 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:37 crc kubenswrapper[4801]: E1206 03:06:37.212799 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:37 crc kubenswrapper[4801]: E1206 03:06:37.212980 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:37 crc kubenswrapper[4801]: E1206 03:06:37.213209 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.233185 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\
\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.255521 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.276427 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.300335 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 
03:06:37.314585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.314649 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.314662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.314688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.314704 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.328952 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.355544 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.375308 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.398221 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.413868 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.419203 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.419256 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.419274 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.419306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.419324 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.431448 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.448333 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.465620 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.499822 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408
ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.515127 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.522033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.522093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.522110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.522140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.522159 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.532607 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.549013 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.567615 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc 
kubenswrapper[4801]: I1206 03:06:37.589397 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:37Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.591729 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/2.log" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.625457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.625505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.625522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.625547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.625565 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.728285 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.728344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.728367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.728396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.728418 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.831361 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.831396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.831408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.831426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.831439 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.933938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.934004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.934021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.934049 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:37 crc kubenswrapper[4801]: I1206 03:06:37.934074 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:37Z","lastTransitionTime":"2025-12-06T03:06:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.037335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.037390 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.037411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.037444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.037466 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.140986 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.142092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.142321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.142556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.142793 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.211498 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:38 crc kubenswrapper[4801]: E1206 03:06:38.212077 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.246325 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.246646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.246944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.247326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.247408 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.350539 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.350587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.350597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.350615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.350627 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.454451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.454883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.455077 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.455357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.455727 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.559706 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.559805 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.559826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.559856 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.559875 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.662743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.662805 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.662822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.662842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.662852 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.765858 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.765914 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.765923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.765942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.765958 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.869504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.869553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.869566 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.869586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.869601 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.972974 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.973015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.973024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.973039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:38 crc kubenswrapper[4801]: I1206 03:06:38.973050 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:38Z","lastTransitionTime":"2025-12-06T03:06:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.076379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.076432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.076447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.076467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.076479 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.180105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.180186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.180201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.180247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.180263 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.211949 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.212057 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:39 crc kubenswrapper[4801]: E1206 03:06:39.212100 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:39 crc kubenswrapper[4801]: E1206 03:06:39.212238 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.212457 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:39 crc kubenswrapper[4801]: E1206 03:06:39.212576 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.283456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.283508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.283526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.283551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.283569 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.386807 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.386920 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.386939 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.386970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.386990 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.490320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.490400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.490412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.490431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.490443 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.594350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.594426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.594455 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.594487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.594508 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.697561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.697621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.697633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.697656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.697670 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.800326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.800380 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.800392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.800407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.800418 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.903480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.903976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.904037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.904073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:39 crc kubenswrapper[4801]: I1206 03:06:39.904095 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:39Z","lastTransitionTime":"2025-12-06T03:06:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.008640 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.008726 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.008744 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.008802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.008821 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.112255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.112344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.112367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.112397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.112417 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.211834 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:40 crc kubenswrapper[4801]: E1206 03:06:40.212147 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.215473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.215510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.215521 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.215541 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.215557 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.320646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.321124 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.321199 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.321306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.321397 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.425189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.425688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.425913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.426054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.426219 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.530505 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.530595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.530618 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.530652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.530676 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.633888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.633976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.634003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.634030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.634048 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.737517 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.737580 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.737597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.737624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.737646 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.840002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.840042 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.840051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.840067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.840078 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.942886 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.942922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:40 crc kubenswrapper[4801]: E1206 03:06:40.943036 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.943337 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:40 crc kubenswrapper[4801]: E1206 03:06:40.943408 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:06:56.943383359 +0000 UTC m=+70.065990931 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.943463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.943506 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:40 crc kubenswrapper[4801]: I1206 03:06:40.943528 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:40Z","lastTransitionTime":"2025-12-06T03:06:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.046217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.046264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.046275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.046293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.046303 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.148724 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.149039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.149143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.149220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.149288 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.212455 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.212491 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.212644 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:41 crc kubenswrapper[4801]: E1206 03:06:41.212749 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:41 crc kubenswrapper[4801]: E1206 03:06:41.213019 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:41 crc kubenswrapper[4801]: E1206 03:06:41.213222 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.252832 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.252909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.252932 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.252959 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.252978 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.355929 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.356011 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.356029 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.356055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.356074 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.459240 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.459434 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.459475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.459508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.459531 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.562595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.562646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.562663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.562690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.562709 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.665973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.666105 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.666125 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.666149 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.666201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.769335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.769397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.769417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.769446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.769464 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.872174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.872248 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.872267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.872296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.872318 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.974361 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.974406 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.974416 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.974432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:41 crc kubenswrapper[4801]: I1206 03:06:41.974443 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:41Z","lastTransitionTime":"2025-12-06T03:06:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.077675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.077738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.077786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.077814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.077833 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.181262 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.181326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.181351 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.181385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.181405 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.211971 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:42 crc kubenswrapper[4801]: E1206 03:06:42.212380 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.284122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.284193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.284210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.284238 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.284257 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.386559 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.386595 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.386608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.386625 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.386667 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.490143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.490212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.490233 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.490259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.490277 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.594457 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.594536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.594552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.594578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.594603 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.697839 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.697892 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.697903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.697922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.697935 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.800843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.800898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.800909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.800928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.800940 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.904075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.904181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.904199 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.904230 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:42 crc kubenswrapper[4801]: I1206 03:06:42.904248 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:42Z","lastTransitionTime":"2025-12-06T03:06:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.007422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.007504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.007543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.007567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.007580 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.111355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.111461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.111479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.111503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.111520 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.211877 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.212042 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:43 crc kubenswrapper[4801]: E1206 03:06:43.212288 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:43 crc kubenswrapper[4801]: E1206 03:06:43.212430 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.212650 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:43 crc kubenswrapper[4801]: E1206 03:06:43.212874 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.214842 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.215013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.215831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.215890 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.215904 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.319838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.319947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.319958 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.320092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.320118 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.423520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.423579 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.423602 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.423633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.423658 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.526981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.527022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.527033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.527049 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.527062 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.631212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.631285 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.631298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.631319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.631355 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.734445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.734495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.734511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.734534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.734549 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.836965 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.837007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.837020 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.837038 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.837050 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.940704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.940775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.940789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.940808 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:43 crc kubenswrapper[4801]: I1206 03:06:43.940822 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:43Z","lastTransitionTime":"2025-12-06T03:06:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.043877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.043926 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.043944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.043968 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.043985 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.148253 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.148319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.148338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.148368 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.148387 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.212286 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:44 crc kubenswrapper[4801]: E1206 03:06:44.212510 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.252341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.252394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.252409 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.252431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.252448 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.356408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.356471 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.356490 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.356514 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.356530 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.460415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.460470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.460483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.460503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.460514 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.563967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.564019 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.564030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.564048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.564059 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.667171 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.667238 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.667264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.667293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.667313 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.770633 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.770703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.770716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.770737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.770771 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.873115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.873205 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.873233 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.873268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.873293 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.976852 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.976904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.976918 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.976944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:44 crc kubenswrapper[4801]: I1206 03:06:44.976968 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:44Z","lastTransitionTime":"2025-12-06T03:06:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.081196 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.081280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.081304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.081337 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.081361 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.185358 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.185480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.185499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.185527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.185540 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.212174 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:45 crc kubenswrapper[4801]: E1206 03:06:45.212316 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.212184 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.212393 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:45 crc kubenswrapper[4801]: E1206 03:06:45.212512 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:45 crc kubenswrapper[4801]: E1206 03:06:45.212608 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.288580 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.288648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.288664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.288692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.288709 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.392214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.392280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.392300 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.392327 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.392346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.496093 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.496162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.496182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.496211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.496234 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.599495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.599542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.599557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.599581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.599597 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.703192 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.703269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.703289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.703318 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.703337 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.807452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.807515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.807527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.807548 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.807563 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.910438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.910479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.910491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.910510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:45 crc kubenswrapper[4801]: I1206 03:06:45.910525 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:45Z","lastTransitionTime":"2025-12-06T03:06:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.013635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.013674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.013685 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.013706 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.013721 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.116453 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.116502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.116520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.116544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.116558 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.211971 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:46 crc kubenswrapper[4801]: E1206 03:06:46.212205 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.218992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.219056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.219072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.219099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.219118 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.239051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.239118 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.239129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.239146 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.239157 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: E1206 03:06:46.254714 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:46Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.258438 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.258497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.258511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.258532 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.258551 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.274628 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.274679 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.274701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.274722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.274733 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.290634 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.290704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.290716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.290740 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.290777 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: E1206 03:06:46.307363 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:46Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.312989 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.313035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.313048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.313067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.313080 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: E1206 03:06:46.326241 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:46Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:46 crc kubenswrapper[4801]: E1206 03:06:46.326415 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.328429 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.328461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.328470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.328487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.328500 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.436597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.436688 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.436701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.436721 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.436733 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.547993 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.548055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.548074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.548100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.548119 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.650829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.650877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.650889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.650909 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.650920 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.753935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.754002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.754021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.754046 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.754065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.858016 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.858109 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.858134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.858168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.858191 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.961460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.961578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.961600 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.961627 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:46 crc kubenswrapper[4801]: I1206 03:06:46.961645 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:46Z","lastTransitionTime":"2025-12-06T03:06:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.065032 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.065076 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.065086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.065101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.065110 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.167444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.167499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.167508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.167527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.167539 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.211864 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.212029 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:47 crc kubenswrapper[4801]: E1206 03:06:47.212185 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:47 crc kubenswrapper[4801]: E1206 03:06:47.212404 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.212214 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:47 crc kubenswrapper[4801]: E1206 03:06:47.212597 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.234427 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e77903
6cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e4911
7b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\
\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.251715 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.265097 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.271178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.271524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.271604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.271679 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.271775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.278939 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.301075 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.319283 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.338681 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.352839 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.364837 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc 
kubenswrapper[4801]: I1206 03:06:47.374946 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.375475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.375729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.376026 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.376921 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.379307 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.397499 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.410108 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.425518 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.438786 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.458919 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7e49326bb7f8e2ba36bc8da38f29f72b99188c2b27009d0eae77a61df5fc252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"message\\\":\\\"entity-vrzqb\\\\nI1206 03:06:21.463860 6274 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1206 03:06:21.463694 6274 services_controller.go:445] Built service openshift-operator-lifecycle-manager/packageserver-service LB template configs for network=default: 
[]services.lbConfig(nil)\\\\nI1206 03:06:21.463831 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1206 03:06:21.463872 6274 ovn.go:134] Ensuring zone local for Pod openshift-kube-controller-manager/kube-controller-manager-crc in node crc\\\\nI1206 03:06:21.463880 6274 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc after 0 failed attempt(s)\\\\nI1206 03:06:21.463661 6274 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-s2sg4\\\\nI1206 03:06:21.463849 6274 services_controller.go:360] Finished syncing service machine-api-operator on namespace openshift-machine-api for network=default : 1.511861ms\\\\nF1206 03:06:21.463457 6274 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, han\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics 
server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",
\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\
\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 
03:06:47.473319 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.480173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.480211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.480223 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.480242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.480256 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.489306 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.502687 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:47Z is after 2025-08-24T17:21:41Z"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.583334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.583575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.583585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.583601 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.583611 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.686733 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.687185 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.687258 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.687349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.687749 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.791144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.791197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.791213 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.791233 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.791244 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.895709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.896136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.896229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.896330 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:47 crc kubenswrapper[4801]: I1206 03:06:47.896434 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:47Z","lastTransitionTime":"2025-12-06T03:06:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.000747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.000812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.000826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.000847 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.000859 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.119467 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.119515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.119528 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.119546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.119561 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.211635 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 03:06:48 crc kubenswrapper[4801]: E1206 03:06:48.211833 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.222814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.222867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.222878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.222900 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.222912 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.325402 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.325451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.325463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.325480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.325491 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.427950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.427992 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.428009 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.428031 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.428043 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.530571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.530618 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.530629 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.530647 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.530659 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.637864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.637922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.638098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.638994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.639014 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.742619 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.742699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.742735 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.742803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.742831 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.846420 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.846477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.846497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.846523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.846542 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.949561 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.949637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.949660 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.949694 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:48 crc kubenswrapper[4801]: I1206 03:06:48.949715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:48Z","lastTransitionTime":"2025-12-06T03:06:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.060245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.060307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.060321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.060344 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.060359 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.164907 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.165004 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.165023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.165045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.165060 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.212139 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.212180 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.212821 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx"
Dec 06 03:06:49 crc kubenswrapper[4801]: E1206 03:06:49.213012 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 03:06:49 crc kubenswrapper[4801]: E1206 03:06:49.212917 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 03:06:49 crc kubenswrapper[4801]: E1206 03:06:49.213443 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.268431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.268527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.268544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.268564 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.268578 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.371584 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.371641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.371652 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.371671 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.371681 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.476422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.476477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.476494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.476520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.476538 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.579606 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.579653 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.579662 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.579678 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.579689 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.683151 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.683189 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.683197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.683212 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.683224 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.786689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.787244 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.787396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.787535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.787672 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.891464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.891523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.891538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.891559 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.891572 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.995313 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.995379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.995399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.995425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:49 crc kubenswrapper[4801]: I1206 03:06:49.995440 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:49Z","lastTransitionTime":"2025-12-06T03:06:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.109002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.109055 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.109066 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.109084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.109099 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211468 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 03:06:50 crc kubenswrapper[4801]: E1206 03:06:50.211681 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211730 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211804 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.211840 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.314213 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.314267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.314278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.314298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.314494 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.417419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.417518 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.417549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.417589 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.417614 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.520377 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.520427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.520439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.520486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.520502 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.623558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.623610 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.623641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.623664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.623678 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.726259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.726331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.726342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.726362 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.726373 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.829562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.829615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.829627 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.829648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.829661 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.932502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.932536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.932545 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.932559 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:50 crc kubenswrapper[4801]: I1206 03:06:50.932569 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:50Z","lastTransitionTime":"2025-12-06T03:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.035720 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.035796 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.035835 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.035861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.035873 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.141120 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.141203 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.141220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.141241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.141260 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.212075 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.212149 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.212291 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:51 crc kubenswrapper[4801]: E1206 03:06:51.212478 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:51 crc kubenswrapper[4801]: E1206 03:06:51.212646 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:51 crc kubenswrapper[4801]: E1206 03:06:51.213082 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.244129 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.244181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.244193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.244211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.244236 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.347479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.347530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.347540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.347557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.347568 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.450152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.450476 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.450540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.450626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.450715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.553695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.553997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.554111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.554202 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.554278 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.656619 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.656663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.656674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.656692 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.656705 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.759741 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.759801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.759813 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.759831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.759843 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.861835 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.862156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.862217 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.862288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.862355 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.965431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.965473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.965482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.965496 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:51 crc kubenswrapper[4801]: I1206 03:06:51.965507 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:51Z","lastTransitionTime":"2025-12-06T03:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.068823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.069341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.069577 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.069847 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.070081 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.173950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.174469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.174879 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.175114 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.175298 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.211615 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.212300 4801 scope.go:117] "RemoveContainer" containerID="12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28" Dec 06 03:06:52 crc kubenswrapper[4801]: E1206 03:06:52.212495 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:06:52 crc kubenswrapper[4801]: E1206 03:06:52.212317 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.231806 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9
a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.246378 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.259458 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.270914 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.278112 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.278150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.278162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.278181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.278196 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.283823 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.296844 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.309570 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.322358 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.333689 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.352186 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\"
:\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.367013 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.378272 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.380704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.380742 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.380776 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.380793 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.380802 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.388284 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.398948 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc 
kubenswrapper[4801]: I1206 03:06:52.412321 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.424395 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.434728 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.449645 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:52Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.483316 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.483356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.483366 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.483382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.483393 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.585356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.585407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.585419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.585437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.585451 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.687668 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.687704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.687715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.687730 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.687775 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.790480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.790550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.790568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.790599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.790621 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.894228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.894291 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.894304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.894321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.894335 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.998636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.998715 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.998738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.998796 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:52 crc kubenswrapper[4801]: I1206 03:06:52.998821 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:52Z","lastTransitionTime":"2025-12-06T03:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.102224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.102269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.102281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.102297 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.102308 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.205995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.206037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.206046 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.206061 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.206070 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.214346 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:53 crc kubenswrapper[4801]: E1206 03:06:53.214478 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.214649 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:53 crc kubenswrapper[4801]: E1206 03:06:53.214809 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.214952 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:53 crc kubenswrapper[4801]: E1206 03:06:53.215023 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.309321 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.309362 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.309374 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.309391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.309403 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.412396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.412474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.412487 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.412535 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.412550 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.515567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.515634 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.515648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.515666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.515678 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.618350 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.618397 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.618408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.618425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.618436 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.720630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.720670 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.720681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.720698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.720709 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.823292 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.823342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.823351 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.823369 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.823380 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.926264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.926320 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.926331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.926352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:53 crc kubenswrapper[4801]: I1206 03:06:53.926365 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:53Z","lastTransitionTime":"2025-12-06T03:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.029236 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.029284 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.029293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.029310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.029321 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.132130 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.132445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.132570 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.132648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.132712 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.211996 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:54 crc kubenswrapper[4801]: E1206 03:06:54.212462 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.235928 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.235971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.235980 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.235996 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.236019 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.338675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.339041 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.339106 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.339174 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.339232 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.442228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.442296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.442308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.442326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.442339 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.545332 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.545387 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.545399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.545419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.545433 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.648148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.648247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.648261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.648278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.648290 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.750530 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.750578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.750587 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.750604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.750614 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.853455 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.853500 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.853511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.853536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.853547 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.956515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.956553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.956562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.956576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:54 crc kubenswrapper[4801]: I1206 03:06:54.956586 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:54Z","lastTransitionTime":"2025-12-06T03:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.058894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.058940 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.058952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.058973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.058986 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.161729 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.161812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.161832 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.161857 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.161881 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.211428 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.211485 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:55 crc kubenswrapper[4801]: E1206 03:06:55.211649 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.211779 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:55 crc kubenswrapper[4801]: E1206 03:06:55.211801 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:55 crc kubenswrapper[4801]: E1206 03:06:55.212030 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.264266 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.264331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.264341 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.264357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.264366 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.367444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.367520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.367549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.367573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.367588 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.471060 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.471144 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.471158 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.471175 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.471207 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.573836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.573883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.573896 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.573915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.573927 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.675889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.675932 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.675942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.675961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.675975 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.778460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.778550 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.778562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.778589 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.778602 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.881172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.881225 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.881234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.881250 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.881260 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.983469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.983527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.983538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.983557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:55 crc kubenswrapper[4801]: I1206 03:06:55.983568 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:55Z","lastTransitionTime":"2025-12-06T03:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.086198 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.086259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.086268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.086287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.086298 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.189433 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.189622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.189635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.189655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.189667 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.211779 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.211969 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.293102 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.293154 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.293166 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.293186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.293201 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.395937 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.395994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.396006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.396024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.396036 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.404582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.404630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.404646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.404666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.404682 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.421556 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:56Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.426016 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.426064 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.426074 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.426092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.426102 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.439409 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:56Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.444323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.444367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.444381 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.444398 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.444410 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.457724 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:56Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.462678 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.462722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.462737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.462773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.462791 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.481629 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:56Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.486616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.486669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.486682 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.486703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.486718 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.507797 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:56Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:56 crc kubenswrapper[4801]: E1206 03:06:56.507921 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.509797 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.509834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.509847 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.509868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.509881 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.612082 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.612152 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.612164 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.612188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.612202 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.714347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.714381 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.714392 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.714407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.714419 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.817111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.817172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.817186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.817206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.817220 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.920038 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.920099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.920111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.920132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:56 crc kubenswrapper[4801]: I1206 03:06:56.920148 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:56Z","lastTransitionTime":"2025-12-06T03:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.024193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.024260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.024279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.024307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.024327 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.031855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:57 crc kubenswrapper[4801]: E1206 03:06:57.032015 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:57 crc kubenswrapper[4801]: E1206 03:06:57.032086 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. No retries permitted until 2025-12-06 03:07:29.032067 +0000 UTC m=+102.154674582 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.129353 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.129428 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.129450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.129480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.129504 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.211874 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.211922 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.211951 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:57 crc kubenswrapper[4801]: E1206 03:06:57.212041 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:57 crc kubenswrapper[4801]: E1206 03:06:57.212134 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:57 crc kubenswrapper[4801]: E1206 03:06:57.212238 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.222851 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs
.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.232472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.232546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.232563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.232593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.232611 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.239957 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.259904 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.275091 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.303515 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.323280 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.335699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.335734 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.335765 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.335785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.335799 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.337273 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.348423 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pro
xy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.363457 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc 
kubenswrapper[4801]: I1206 03:06:57.378441 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.395230 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.408961 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.424235 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.456448 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9
a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.457616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.457646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.457656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.457674 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.457685 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.483392 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.495777 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.507064 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.519292 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.561006 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.561053 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.561065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.561084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.561098 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.663036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.663092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.663102 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.663121 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.663131 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.690114 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/0.log" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.690165 4801 generic.go:334] "Generic (PLEG): container finished" podID="9695c5a7-610b-4c76-aa6f-b4f06f20823e" containerID="a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33" exitCode=1 Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.690201 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerDied","Data":"a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.690602 4801 scope.go:117] "RemoveContainer" containerID="a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.704043 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.718172 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.730078 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.749384 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03
:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.762199 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.766358 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.766417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.766431 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.766456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.766470 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.773809 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.785451 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.800985 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.814082 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.835157 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.847238 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.862220 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.868348 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.868379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.868389 4801 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.868403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.868413 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.881096 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9
a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.896640 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.910920 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.922543 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.935593 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.949875 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:57Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.970867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.970922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.970934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.970951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:57 crc kubenswrapper[4801]: I1206 03:06:57.970965 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:57Z","lastTransitionTime":"2025-12-06T03:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.073830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.073866 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.073875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.073888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.073900 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.176346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.176654 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.176743 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.176871 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.176946 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.212002 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:06:58 crc kubenswrapper[4801]: E1206 03:06:58.212521 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.279080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.279122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.279134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.279155 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.279167 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.382373 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.382426 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.382437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.382455 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.382466 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.484442 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.484517 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.484528 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.484542 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.484553 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.586565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.586603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.586612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.586628 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.586638 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.689645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.689704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.689716 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.689738 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.689778 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.699252 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/0.log" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.699349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerStarted","Data":"bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.713189 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.729769 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.743099 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] 
Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.753009 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465c
b5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.773371 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.789331 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.792969 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.793025 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.793036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.793057 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.793069 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.805527 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.817267 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.827379 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc 
kubenswrapper[4801]: I1206 03:06:58.839181 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.851861 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.861689 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.872601 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.891089 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9
a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.895295 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.895319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.895328 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.895342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.895352 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.904603 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.917371 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.929927 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.944333 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:06:58Z is after 2025-08-24T17:21:41Z" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.998162 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.998204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.998214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.998229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:58 crc kubenswrapper[4801]: I1206 03:06:58.998238 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:58Z","lastTransitionTime":"2025-12-06T03:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.100407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.100449 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.100460 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.100475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.100485 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.202512 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.202558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.202567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.202591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.202635 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.211866 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.211866 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.211954 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:06:59 crc kubenswrapper[4801]: E1206 03:06:59.212044 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:06:59 crc kubenswrapper[4801]: E1206 03:06:59.212107 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:06:59 crc kubenswrapper[4801]: E1206 03:06:59.212202 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.304955 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.305005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.305017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.305037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.305050 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.407845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.407892 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.407904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.407922 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.407936 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.511229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.511282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.511292 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.511309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.511320 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.614157 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.614207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.614221 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.614239 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.614251 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.716319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.716370 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.716386 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.716407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.716427 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.819495 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.819584 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.819613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.819639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.819659 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.921639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.921681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.921690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.921705 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:06:59 crc kubenswrapper[4801]: I1206 03:06:59.921715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:06:59Z","lastTransitionTime":"2025-12-06T03:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.024529 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.024569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.024579 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.024596 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.024606 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.128287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.128355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.128379 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.128410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.128433 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.212134 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:00 crc kubenswrapper[4801]: E1206 03:07:00.212359 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.231014 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.231069 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.231088 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.231113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.231131 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.333920 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.334255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.334280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.334378 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.334402 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.437560 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.437615 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.437626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.437644 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.437656 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.540878 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.540935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.540951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.540975 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.540996 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.643575 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.643621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.643638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.643658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.643672 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.747287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.747342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.747355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.747418 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.747432 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.849931 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.849975 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.849985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.850002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.850012 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.952856 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.952894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.952903 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.952919 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:00 crc kubenswrapper[4801]: I1206 03:07:00.952930 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:00Z","lastTransitionTime":"2025-12-06T03:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.055343 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.055393 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.055405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.055425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.055440 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.158520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.158573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.158588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.158608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.158623 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.211680 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.211732 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.211729 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:01 crc kubenswrapper[4801]: E1206 03:07:01.211871 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:01 crc kubenswrapper[4801]: E1206 03:07:01.211945 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:01 crc kubenswrapper[4801]: E1206 03:07:01.212044 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.262039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.262087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.262097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.262115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.262125 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.364445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.364503 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.364515 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.364563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.364576 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.466877 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.466936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.466952 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.466977 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.466993 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.570210 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.570277 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.570297 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.570322 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.570340 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.673044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.673128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.673150 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.673177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.673200 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.775733 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.775795 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.775808 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.775823 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.775833 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.879405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.879474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.879493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.879520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.879538 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.983122 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.983179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.983188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.983201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:01 crc kubenswrapper[4801]: I1206 03:07:01.983211 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:01Z","lastTransitionTime":"2025-12-06T03:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.085888 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.085963 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.085980 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.086005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.086022 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.189461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.189531 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.189551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.189581 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.189598 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.211369 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:02 crc kubenswrapper[4801]: E1206 03:07:02.211554 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.292907 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.292957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.292967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.292983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.292998 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.396523 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.396574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.396586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.396608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.396623 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.499023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.499065 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.499073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.499089 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.499099 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.602139 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.602220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.602243 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.602275 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.602297 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.705178 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.705288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.705315 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.705346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.705368 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.808957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.809018 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.809030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.809053 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.809068 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.912848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.912915 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.912935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.912975 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:02 crc kubenswrapper[4801]: I1206 03:07:02.913001 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:02Z","lastTransitionTime":"2025-12-06T03:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.019875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.020135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.020151 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.020177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.020197 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.123698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.123740 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.123749 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.123786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.123796 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.211949 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.212120 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.212270 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:03 crc kubenswrapper[4801]: E1206 03:07:03.213206 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:03 crc kubenswrapper[4801]: E1206 03:07:03.213348 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.213608 4801 scope.go:117] "RemoveContainer" containerID="12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28" Dec 06 03:07:03 crc kubenswrapper[4801]: E1206 03:07:03.212592 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.226918 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.226973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.226991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.227015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.227035 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.330385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.330456 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.330470 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.330493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.330507 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.435225 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.435267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.435277 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.435302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.435314 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.538497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.538554 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.538585 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.538610 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.538621 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.641182 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.641226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.641242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.641260 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.641270 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.719109 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/2.log" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.723299 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.724041 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.744376 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.744447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.744461 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.744485 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.744499 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.749689 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.774003 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7d
ec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.793901 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.808894 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.824166 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.840456 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.847391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.847493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.847512 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.847536 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.847555 4801 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.861513 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.877820 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.896735 4801 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc 
kubenswrapper[4801]: I1206 03:07:03.915782 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fda
ba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.934205 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.949952 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.951349 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.951474 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.951493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.951511 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.951525 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:03Z","lastTransitionTime":"2025-12-06T03:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.975323 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:03 crc kubenswrapper[4801]: I1206 03:07:03.993919 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:03Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.022011 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:04Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.048005 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:04Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.054039 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.054073 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.054084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.054104 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.054117 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.062129 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] 
Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:04Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.073346 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa
3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:04Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.156632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.156712 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.156724 4801 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.156741 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.156825 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.211550 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:04 crc kubenswrapper[4801]: E1206 03:07:04.211706 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.259088 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.259135 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.259145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.259163 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.259177 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.361889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.361925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.361934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.361948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.361958 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.464444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.464493 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.464504 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.464520 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.464533 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.567220 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.567287 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.567305 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.567333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.567356 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.669978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.670023 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.670037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.670054 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.670065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.772570 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.772635 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.772661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.772686 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.772706 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.875547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.875643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.875672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.875703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.875782 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.979044 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.979116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.979154 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.979179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:04 crc kubenswrapper[4801]: I1206 03:07:04.979195 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:04Z","lastTransitionTime":"2025-12-06T03:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.081991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.082036 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.082045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.082060 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.082072 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.185033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.185098 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.185116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.185138 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.185154 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.211660 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.211675 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:05 crc kubenswrapper[4801]: E1206 03:07:05.211875 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.211680 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:05 crc kubenswrapper[4801]: E1206 03:07:05.212051 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:05 crc kubenswrapper[4801]: E1206 03:07:05.212181 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.321704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.321803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.321818 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.321843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.321882 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.424045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.424115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.424131 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.424147 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.424172 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.526916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.526983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.527003 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.527030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.527047 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.629624 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.629666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.629675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.629689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.629700 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.731306 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/3.log" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.731966 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/2.log" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.732033 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.732513 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.732537 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.732562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.732576 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.734822 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" exitCode=1 Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.734869 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.734913 4801 scope.go:117] "RemoveContainer" containerID="12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.736116 4801 scope.go:117] "RemoveContainer" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" Dec 06 03:07:05 crc kubenswrapper[4801]: E1206 03:07:05.736311 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.751829 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.763013 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] 
Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.772066 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa
3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.788660 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.804480 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.818070 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.828706 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.836524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.836636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.836653 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.836675 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.836690 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.842715 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.857013 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.871286 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.885878 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.898640 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc 
kubenswrapper[4801]: I1206 03:07:05.910727 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.923809 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.938883 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.938944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.938954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.938971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.938985 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:05Z","lastTransitionTime":"2025-12-06T03:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.942128 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.956474 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.980257 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:07:05Z\\\",\\\"message\\\":\\\"default: []services.lbConfig(nil)\\\\nI1206 03:07:04.427501 6839 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1206 03:07:04.427524 6839 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
defau\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e3
5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:05 crc kubenswrapper[4801]: I1206 03:07:05.998785 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:05Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.045186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.045228 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.045237 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.045252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.045265 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.147717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.147775 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.147786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.147802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.147812 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.212190 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.212357 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.250557 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.250622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.250639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.250663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.250681 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.352792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.352831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.352841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.352860 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.352871 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.455525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.455565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.455573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.455590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.455599 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.558509 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.558567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.558579 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.558598 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.558609 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.635700 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.635787 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.635798 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.635814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.635856 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.650168 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.653930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.653983 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.653995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.654017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.654028 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.667073 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.671818 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.671861 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.671873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.671893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.671907 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.688788 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.699501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.699553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.699568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.699590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.699618 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.712421 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.717123 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.717179 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.717191 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.717209 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.717222 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.734722 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:06Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:06 crc kubenswrapper[4801]: E1206 03:07:06.734924 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.739056 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.739101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.739113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.739132 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.739147 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.740622 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/3.log" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.841991 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.842057 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.842070 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.842090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.842104 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.943953 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.943999 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.944012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.944029 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:06 crc kubenswrapper[4801]: I1206 03:07:06.944042 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:06Z","lastTransitionTime":"2025-12-06T03:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.046497 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.046551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.046562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.046580 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.046593 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.149419 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.149491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.149528 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.149547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.149557 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.212151 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.212317 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:07 crc kubenswrapper[4801]: E1206 03:07:07.212336 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.212419 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:07 crc kubenswrapper[4801]: E1206 03:07:07.213085 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:07 crc kubenswrapper[4801]: E1206 03:07:07.213180 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.234122 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8a57dd71-1caf-4193-9a92-2fd1f871832a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"le observer\\\\nW1206 03:06:04.509532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1206 03:06:04.509790 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1206 03:06:04.512009 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1896008697/tls.crt::/tmp/serving-cert-1896008697/tls.key\\\\\\\"\\\\nI1206 03:06:05.164955 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1206 03:06:05.167445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1206 03:06:05.167510 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1206 03:06:05.167891 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1206 03:06:05.167928 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1206 03:06:05.173201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1206 03:06:05.173246 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1206 03:06:05.173289 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173314 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1206 03:06:05.173338 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1206 03:06:05.173358 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1206 03:06:05.173378 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1206 03:06:05.173397 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1206 03:06:05.174385 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.246511 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2845b9f495b0f04368a404323dff7772749f5e14e27413f1cdf64c6a4681582\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.251298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.251333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.251343 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.251360 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.251373 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.258353 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-s2sg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b06bf6d5-3516-41cd-b649-1ad8521969c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0331739e5dcb1939c5465cb5101df279e74e8f239b5eac20e1f34fdf0fb61ee1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qf62\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-s2sg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.288636 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1887628-2b02-4529-be74-3ee783531329\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c7694c7d813184a232e3c8d0a6a97d6277a6d1b2827ac4feb00c3d2e037d16e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca
\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cf8360a817efaf4e7630b6d03cd092bc993a4cd8e22cda98dc2ee980a758c5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://55d0bb5818dcf4116a2cab7211eb1a23324dd5dcf5747b244ab028d7683ed344\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbeb40c8ba1a69b39ee128eba8b351855f27460b41621e1c5fb4c66aa3ec21f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff500c227141ed045d2c02e2facb6362ed893decfd3740645d7e2409bed176cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d30d28e0da99cd5a6909fa585ff9b3eb32868fd159a0c3116faeccf8dbe57b78\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecfa1ac526267283598a6b87f9d1e9abca520019df047876989fb0d74fa327a0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d283c2dc51db96d4d8ad7408ac391e9f8e683d1f5c0233c7469734042ef1f3e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.302397 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dfbc51c4659d6587bc36c78103156d1927b085ee91b53a8e1653eb3fa8e06b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cac1db0e69b8fbf928e7d1f9decb70e640481f6b0c547864ace88aed4c288e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.314729 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54a0ee06-a8e7-4d96-844f-d0dd3c90e900\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a397151c1b491b5a236c44803ef78a131e2c68bf27201306cf597a562bfa769c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597c9c69810084e7e4768814de0ea59822551773
678076d8498a1ea045dafbf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-twbsq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mjmtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.327816 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e05c2b2b-91fd-47d5-8af2-7e79eabe1585\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://350be4ca88e2cfa0faaa860bf6a0d55d5d559527ef7545fec2046ae34b169e5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fdc7aa6e92be108106dbef39c22d7e05ff31
e83c3c5b966a1c49e89e41d186a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmzwc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-md5jx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.339704 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"134354b0-1613-4536-aaf8-4e5ad12705f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vssxc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-wpnbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc 
kubenswrapper[4801]: I1206 03:07:07.350784 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.353801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.353833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.353868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.353886 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.353899 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.361338 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"255a8708-d8e6-4297-97f2-2ccba66e4037\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e38b2f4d26c4b2f1c9144b528dc2314e75d41265ed18e7ef214830c5b685736e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ecc52a76f83d7bf9288c34a0f99788bd30fe8653288e0c6345b121054dcfd50c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://559329f4896d232f9479a57ba8ce001ad8e411ce010035f603ae2c2ed7e4a406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8
a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c49418a044aef436001ccd95741ae0071719b2edc316d7ea0fe7377d8164b927\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.372348 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c8f9263c221329dcf8fd4f0627ced1377d5e9e62adbab89e6e0fd92a4725ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.383869 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.409068 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cd76211-e203-4b5b-98b0-102d3d67315d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12c615f34f5cbd9a719611df010bd652c2c142578f0e3d21e57749141d618f28\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:35Z\\\",\\\"message\\\":\\\"n-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1206 03:06:35.256862 6486 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1206 03:06:35.257127 6486 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-cluster-version/cluster-version-operator]} name:Service_openshift-cluster-version/cluster-version-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.182:9099:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {61d39e4d-21a9-4387-9a2b-fa4ad14792e2}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1206 03:06:35.257220 6486 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:07:05Z\\\",\\\"message\\\":\\\"default: []services.lbConfig(nil)\\\\nI1206 03:07:04.427501 6839 services_controller.go:451] Built service openshift-etcd/etcd cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd/etcd_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd/etcd\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:2379, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.253\\\\\\\", Port:9979, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF1206 03:07:04.427524 6839 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start 
defau\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:07:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e3
5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qs2f9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8qjvm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.425409 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dcvff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"702cb807-2b51-4192-bf87-5df8398a8cf2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e37b2531ee78a13ac0ca08218b37c0c8a52edcf7b42d17ba442ee3be1c20693\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f1a175e507d086609a6e9b0d8b5d72bb923c41744eee4f9fc095d1b20ed2141\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c98ed9556e8d808aba9007085fdaba7ac4fd780f6f6606e79395d35287970fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:12Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://233a5030fdb2cac24b97c835546f4f9886dd1cbaf7dc35bfe705450cbc44e5e1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://baf58
741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://baf58741783d40f210efbc7301299e2a8d38163e20dd5fa0a2bee98cd76bedef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c5fa509cb95eced6165b05e193890d13c2b268637488ddc412c577adc062e794\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:15Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c3453e32e5628c6a32569f98572a96532f04567808d88da1d50d9aa86358ca3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:06:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fxf65\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dcvff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.440100 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea0a2110-e227-47d7-8503-80ceaf5300e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://795d54cdeec2390a76473c381e5650dba8aad4b93ab4f1d3a6a1eeb26400d93b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfbce9362858085e4312e687ac730bd21bfb864d1908ebb1435a7646e2ae57f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4256a93f216e4983fe5be0d280d7c001d3fc2fbb02af6903b0e60ebcfee855aa\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.453099 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.456865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.456905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.456916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.456933 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.456945 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.465931 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.477073 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:07Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.559214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.559271 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.559288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.559342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.559362 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.662134 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.662176 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.662188 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.662206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.662219 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.764695 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.764737 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.764747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.764783 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.764794 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.868411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.868472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.868488 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.868507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.868520 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.970930 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.970987 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.970997 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.971015 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:07 crc kubenswrapper[4801]: I1206 03:07:07.971032 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:07Z","lastTransitionTime":"2025-12-06T03:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.074173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.074242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.074257 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.074278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.074292 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.177448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.177526 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.177537 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.177603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.177619 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.211684 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:08 crc kubenswrapper[4801]: E1206 03:07:08.211863 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.279545 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.279609 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.279620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.279683 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.279696 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.382551 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.382623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.382639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.382657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.382668 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.485747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.485829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.485843 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.485864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.485878 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.589405 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.589453 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.589463 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.589482 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.589494 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.692309 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.692356 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.692367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.692387 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.692399 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.795668 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.795747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.795794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.795822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.795841 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.898148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.898226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.898241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.898264 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.898285 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:08Z","lastTransitionTime":"2025-12-06T03:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:08 crc kubenswrapper[4801]: I1206 03:07:08.970533 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:07:08 crc kubenswrapper[4801]: E1206 03:07:08.970795 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:12.970744242 +0000 UTC m=+146.093351834 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.001127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.001180 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.001194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.001215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.001229 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.072626 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.072710 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.072745 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.072844 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.072970 4801 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073081 4801 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073146 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073179 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073096 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073201 4801 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073220 4801 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073235 4801 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073116 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.073093306 +0000 UTC m=+146.195700898 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073334 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.07328101 +0000 UTC m=+146.195888592 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073378 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.073362022 +0000 UTC m=+146.195969785 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.073413 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.073399623 +0000 UTC m=+146.196007215 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.103538 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.103590 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.103603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.103622 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.103636 4801 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.206824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.206868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.206880 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.206899 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.206913 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.212461 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.212530 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.212590 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.212461 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.212668 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:09 crc kubenswrapper[4801]: E1206 03:07:09.213003 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.309601 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.309661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.309680 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.309704 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.309721 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.413342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.413400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.413417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.413439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.413456 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.515852 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.515893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.515905 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.515921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.515934 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.898281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.898335 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.898346 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.898365 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:09 crc kubenswrapper[4801]: I1206 03:07:09.898380 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:09Z","lastTransitionTime":"2025-12-06T03:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.001410 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.001507 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.001522 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.001546 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.001570 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.104574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.104637 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.104651 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.104672 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.104687 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.206907 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.206947 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.206958 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.206976 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.206990 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.211293 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:10 crc kubenswrapper[4801]: E1206 03:07:10.211462 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.224253 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.310435 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.310836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.311030 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.311168 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.311297 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.414101 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.414155 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.414165 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.414180 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.414190 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.517288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.517398 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.517421 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.517447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.517472 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.620252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.620319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.620333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.620352 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.620383 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.722994 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.723062 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.723071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.723087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.723098 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.825525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.825568 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.825578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.825592 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.825620 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.928582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.928638 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.928647 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.928667 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:10 crc kubenswrapper[4801]: I1206 03:07:10.928687 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:10Z","lastTransitionTime":"2025-12-06T03:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.031302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.031338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.031347 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.031364 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.031375 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.134412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.134494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.134533 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.134553 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.134567 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.211834 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.211980 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:11 crc kubenswrapper[4801]: E1206 03:07:11.212017 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.212073 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:11 crc kubenswrapper[4801]: E1206 03:07:11.212198 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:11 crc kubenswrapper[4801]: E1206 03:07:11.212237 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.237248 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.237298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.237312 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.237333 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.237346 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.340173 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.340219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.340231 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.340251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.340263 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.443427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.443494 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.443514 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.443547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.443568 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.547527 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.547597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.547623 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.547657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.547682 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.650791 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.650948 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.650970 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.651000 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.651021 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.754684 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.754781 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.754794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.754814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.754826 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.858465 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.858533 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.858548 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.858574 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.858593 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.961245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.961280 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.961288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.961304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:11 crc kubenswrapper[4801]: I1206 03:07:11.961313 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:11Z","lastTransitionTime":"2025-12-06T03:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.064563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.064620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.064634 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.064657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.064672 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.168712 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.168802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.168812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.168835 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.168851 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.212406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:12 crc kubenswrapper[4801]: E1206 03:07:12.212618 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.272072 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.272127 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.272143 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.272166 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.272184 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.375780 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.375979 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.375995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.376115 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.376131 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.478897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.478962 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.478972 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.478995 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.479009 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.582198 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.582252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.582267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.582288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.582302 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.684516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.684556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.684565 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.684579 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.684590 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.786490 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.786534 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.786544 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.786558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.786567 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.888882 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.888923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.888934 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.888950 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.888959 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.991017 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.991067 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.991075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.991092 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:12 crc kubenswrapper[4801]: I1206 03:07:12.991105 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:12Z","lastTransitionTime":"2025-12-06T03:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.093606 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.093643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.093653 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.093671 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.093685 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.195998 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.196058 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.196079 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.196106 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.196131 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.212325 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.212396 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.212345 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:13 crc kubenswrapper[4801]: E1206 03:07:13.212814 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:13 crc kubenswrapper[4801]: E1206 03:07:13.212943 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:13 crc kubenswrapper[4801]: E1206 03:07:13.213270 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.298667 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.298702 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.298713 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.298728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.298738 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.401785 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.401833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.401848 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.401867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.401878 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.504701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.504789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.504806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.504828 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.504850 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.608052 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.608126 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.608148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.608181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.608205 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.711661 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.711726 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.711747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.711831 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.711853 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.815241 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.815288 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.815306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.815331 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.815349 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.918248 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.918298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.918308 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.918326 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:13 crc kubenswrapper[4801]: I1206 03:07:13.918341 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:13Z","lastTransitionTime":"2025-12-06T03:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.021525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.021612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.021630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.021659 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.021677 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.124748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.124802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.124810 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.124826 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.124839 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.211411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:14 crc kubenswrapper[4801]: E1206 03:07:14.211612 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.227415 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.227458 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.227468 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.227483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.227496 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.329742 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.329806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.329815 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.329832 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.329845 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.433822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.433868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.433880 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.433897 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.433908 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.537206 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.537261 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.537272 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.537290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.537303 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.640707 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.640770 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.640781 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.640796 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.640805 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.743789 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.743838 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.743851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.743868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.743883 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.846750 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.846822 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.846837 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.846858 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.846872 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.950100 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.950156 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.950169 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.950193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:14 crc kubenswrapper[4801]: I1206 03:07:14.950206 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:14Z","lastTransitionTime":"2025-12-06T03:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.054293 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.054371 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.054394 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.054422 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.054452 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.157151 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.157245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.157273 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.157307 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.157331 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.212211 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.212247 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.212245 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:15 crc kubenswrapper[4801]: E1206 03:07:15.212453 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:15 crc kubenswrapper[4801]: E1206 03:07:15.212599 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:15 crc kubenswrapper[4801]: E1206 03:07:15.212751 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.259949 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.260000 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.260010 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.260032 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.260045 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.362896 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.362966 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.362990 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.363021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.363040 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.466366 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.466425 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.466452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.466475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.466490 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.570501 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.570570 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.570588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.570612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.570628 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.674641 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.674681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.674690 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.674708 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.674720 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.777961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.778012 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.778024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.778048 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.778065 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.882175 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.882234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.882245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.882267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.882282 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.991284 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.991357 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.991375 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.991401 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:15 crc kubenswrapper[4801]: I1206 03:07:15.991420 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:15Z","lastTransitionTime":"2025-12-06T03:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.095377 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.095421 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.095454 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.095478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.095495 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.198571 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.198629 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.198643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.198664 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.198677 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.212021 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:16 crc kubenswrapper[4801]: E1206 03:07:16.212174 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.302201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.302269 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.302282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.302304 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.302316 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.405060 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.405099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.405110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.405128 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.405138 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.508747 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.508846 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.508906 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.508935 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.508953 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.612722 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.612850 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.612867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.612896 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.612916 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.717165 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.717232 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.717251 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.717278 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.717397 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.821502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.821620 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.821655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.821701 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.821730 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.924119 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.924184 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.924201 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.924227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:16 crc kubenswrapper[4801]: I1206 03:07:16.924246 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:16Z","lastTransitionTime":"2025-12-06T03:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.027988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.028091 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.028118 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.028148 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.028169 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.127773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.127833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.127845 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.127868 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.127884 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.149912 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.156445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.156499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.156517 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.156545 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.156563 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.172294 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.177803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.177869 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.177887 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.177916 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.177935 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.195873 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.201086 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.201172 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.201195 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.201229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.201251 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.212121 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.212179 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.212358 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.212420 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.212566 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.212721 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.216108 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.222478 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.222604 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.222829 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.222898 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.222929 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.231142 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"db4bb4ef-e362-44f3-9a5e-27c66adbba64\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cce160b311f3f235e8920ddf568101fde66fb469405479f12d8454ec00883399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c744cb5733f92c5da5b8e294bf1fe18e7dad93bbfbf3ad84dd493fe13a40bfd1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c744cb5733f92c5da5b8e294bf1fe18e7dad93bbfbf3ad84dd493fe13a40bfd1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T03:05:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T03:05:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:05:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.246700 4801 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T03:07:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"abfd5160-9396-4d4e-928e-b20d5ecf73f0\\\",\\\"systemUUID\\\":\\\"0b4685bb-fe54-4148-bf81-6b341147ef19\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: E1206 03:07:17.252111 4801 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.255013 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.257286 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.257336 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.257370 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.257391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.257404 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.273306 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4gxwt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9695c5a7-610b-4c76-aa6f-b4f06f20823e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T03:06:57Z\\\",\\\"message\\\":\\\"2025-12-06T03:06:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ 
to /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a\\\\n2025-12-06T03:06:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7a3fbf67-bfdb-47b3-8c92-b6b1f4b8f70a to /host/opt/cni/bin/\\\\n2025-12-06T03:06:11Z [verbose] multus-daemon started\\\\n2025-12-06T03:06:11Z [verbose] Readiness Indicator file check\\\\n2025-12-06T03:06:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T03:06:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5bnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4gxwt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.289824 4801 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x2kfc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"524d8648-db2b-432b-959e-068533d1b55d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T03:06:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20414199677f33faa3247f7231827ef0e206944bee579e3da7be07fa86b8873e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T03:06:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b966g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T03:06:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x2kfc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T03:07:17Z is after 2025-08-24T17:21:41Z" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.345306 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.345278307 podStartE2EDuration="1m12.345278307s" podCreationTimestamp="2025-12-06 03:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.344841116 +0000 UTC m=+90.467448698" watchObservedRunningTime="2025-12-06 03:07:17.345278307 +0000 UTC m=+90.467885879" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.345560 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.345553275 podStartE2EDuration="1m9.345553275s" podCreationTimestamp="2025-12-06 03:06:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.324322133 +0000 UTC m=+90.446929725" watchObservedRunningTime="2025-12-06 03:07:17.345553275 +0000 UTC m=+90.468160847" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.363147 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.363197 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.363208 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.363224 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.363235 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.388699 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-s2sg4" podStartSLOduration=68.388674833 podStartE2EDuration="1m8.388674833s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.373918684 +0000 UTC m=+90.496526266" watchObservedRunningTime="2025-12-06 03:07:17.388674833 +0000 UTC m=+90.511282405" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.435075 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podStartSLOduration=68.435047918 podStartE2EDuration="1m8.435047918s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.418475371 +0000 UTC m=+90.541082943" watchObservedRunningTime="2025-12-06 03:07:17.435047918 
+0000 UTC m=+90.557655500" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.457026 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-md5jx" podStartSLOduration=67.456998808 podStartE2EDuration="1m7.456998808s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.436154697 +0000 UTC m=+90.558762279" watchObservedRunningTime="2025-12-06 03:07:17.456998808 +0000 UTC m=+90.579606390" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.466524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.466562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.466576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.466594 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.466608 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.480383 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.480372626 podStartE2EDuration="1m11.480372626s" podCreationTimestamp="2025-12-06 03:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.479901503 +0000 UTC m=+90.602509095" watchObservedRunningTime="2025-12-06 03:07:17.480372626 +0000 UTC m=+90.602980208" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.492744 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.492732782 podStartE2EDuration="43.492732782s" podCreationTimestamp="2025-12-06 03:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.491929991 +0000 UTC m=+90.614537573" watchObservedRunningTime="2025-12-06 03:07:17.492732782 +0000 UTC m=+90.615340364" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.568502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.568543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.568556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.568572 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.568584 4801 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.572885 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dcvff" podStartSLOduration=68.572859449 podStartE2EDuration="1m8.572859449s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:17.572334095 +0000 UTC m=+90.694941667" watchObservedRunningTime="2025-12-06 03:07:17.572859449 +0000 UTC m=+90.695467021" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.671131 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.671208 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.671226 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.671259 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.671276 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.774227 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.774573 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.774583 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.774599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.774631 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.877242 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.877289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.877302 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.877319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.877333 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.980588 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.980630 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.980639 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.980656 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:17 crc kubenswrapper[4801]: I1206 03:07:17.980667 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:17Z","lastTransitionTime":"2025-12-06T03:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.083773 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.083824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.083836 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.083853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.083865 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.187400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.187464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.187479 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.187502 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.187517 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.211742 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:18 crc kubenswrapper[4801]: E1206 03:07:18.211903 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.290167 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.290219 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.290235 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.290255 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.290266 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.392984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.393045 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.393057 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.393075 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.393087 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.495904 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.495964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.495981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.496007 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.496027 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.599284 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.599334 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.599345 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.599367 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.599381 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.701860 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.701925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.701936 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.701954 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.701966 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.804411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.804475 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.804489 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.804510 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.804523 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.908748 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.908873 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.908925 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.908964 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:18 crc kubenswrapper[4801]: I1206 03:07:18.908985 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:18Z","lastTransitionTime":"2025-12-06T03:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.012452 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.012540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.012549 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.012569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.012600 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.115382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.115437 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.115450 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.115469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.115482 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.211788 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.211810 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.212671 4801 scope.go:117] "RemoveContainer" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" Dec 06 03:07:19 crc kubenswrapper[4801]: E1206 03:07:19.212823 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.212850 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:19 crc kubenswrapper[4801]: E1206 03:07:19.212915 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:07:19 crc kubenswrapper[4801]: E1206 03:07:19.212991 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:19 crc kubenswrapper[4801]: E1206 03:07:19.213243 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.219013 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.219064 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.219081 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.219099 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.219112 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.274983 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4gxwt" podStartSLOduration=70.274946646 podStartE2EDuration="1m10.274946646s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:19.272774468 +0000 UTC m=+92.395382040" watchObservedRunningTime="2025-12-06 03:07:19.274946646 +0000 UTC m=+92.397554238" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.287676 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-x2kfc" podStartSLOduration=70.287643551 podStartE2EDuration="1m10.287643551s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:19.285843473 +0000 UTC m=+92.408451075" watchObservedRunningTime="2025-12-06 03:07:19.287643551 +0000 UTC m=+92.410251123" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.300600 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.300382468 podStartE2EDuration="9.300382468s" podCreationTimestamp="2025-12-06 03:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:19.29974151 +0000 UTC m=+92.422349102" watchObservedRunningTime="2025-12-06 03:07:19.300382468 +0000 UTC m=+92.422990040" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.324804 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.324877 4801 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.324889 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.324910 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.324923 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.427207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.427249 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.427279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.427296 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.427311 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.530111 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.530176 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.530187 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.530200 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.530209 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.633614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.633669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.633684 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.633703 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.633716 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.736404 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.736462 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.736473 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.736491 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.736503 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.839395 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.839448 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.839464 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.839486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.839502 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.941911 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.941957 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.941967 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.941984 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:19 crc kubenswrapper[4801]: I1206 03:07:19.941997 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:19Z","lastTransitionTime":"2025-12-06T03:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.044486 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.044543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.044556 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.044578 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.044590 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.147865 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.147938 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.147961 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.147988 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.148007 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.212006 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:20 crc kubenswrapper[4801]: E1206 03:07:20.212167 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.251298 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.251363 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.251382 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.251411 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.251431 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.354186 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.354266 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.354289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.354319 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.354338 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.457084 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.457136 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.457145 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.457170 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.457181 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.561268 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.561323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.561338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.561364 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.561380 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.664273 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.664325 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.664338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.664359 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.664372 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.768155 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.768229 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.768252 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.768285 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.768309 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.871569 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.871643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.871658 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.871679 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.871696 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.974864 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.974921 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.974942 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.974965 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:20 crc kubenswrapper[4801]: I1206 03:07:20.974975 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:20Z","lastTransitionTime":"2025-12-06T03:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.077827 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.077949 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.077963 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.077981 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.077994 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.181345 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.181399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.181412 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.181432 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.181444 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.212192 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.212188 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.212327 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:21 crc kubenswrapper[4801]: E1206 03:07:21.212431 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:21 crc kubenswrapper[4801]: E1206 03:07:21.212592 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:21 crc kubenswrapper[4801]: E1206 03:07:21.212805 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.283713 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.283760 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.283786 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.283806 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.283817 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.387483 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.387563 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.387586 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.387621 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.387644 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.491612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.491668 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.491679 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.491699 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.491715 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.594391 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.594434 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.594446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.594469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.594480 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.698113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.698170 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.698204 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.698234 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.698250 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.800727 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.800801 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.800814 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.800830 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.800841 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.904444 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.904875 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.905024 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.905177 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:21 crc kubenswrapper[4801]: I1206 03:07:21.905299 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:21Z","lastTransitionTime":"2025-12-06T03:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.008698 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.008753 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.008778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.008794 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.008806 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.111944 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.112232 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.112469 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.112663 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.112861 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.211777 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:22 crc kubenswrapper[4801]: E1206 03:07:22.212384 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.215863 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.216021 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.216097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.216195 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.216271 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.319913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.319985 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.320005 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.320037 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.320058 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.422636 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.423027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.423080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.423116 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.423141 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.526717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.526802 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.526815 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.526834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.526848 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.630728 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.630807 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.630817 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.630834 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.630845 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.734035 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.734078 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.734087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.734104 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.734116 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.838403 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.838477 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.838496 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.838717 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.838742 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.943353 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.943428 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.943447 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.943484 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:22 crc kubenswrapper[4801]: I1206 03:07:22.943505 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:22Z","lastTransitionTime":"2025-12-06T03:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.046817 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.046894 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.046913 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.046943 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.046963 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.150342 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.150841 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.151026 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.151193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.151352 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.211502 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.211610 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.211530 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:23 crc kubenswrapper[4801]: E1206 03:07:23.211730 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:23 crc kubenswrapper[4801]: E1206 03:07:23.212188 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:23 crc kubenswrapper[4801]: E1206 03:07:23.212368 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.255310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.255366 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.255385 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.255407 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.255428 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.358709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.358803 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.358824 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.358851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.358870 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.462439 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.462508 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.462525 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.462558 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.462575 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.565778 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.565833 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.565851 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.565884 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.565905 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.669027 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.669071 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.669090 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.669113 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.669132 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.772323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.772396 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.772414 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.772446 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.772466 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.875709 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.875779 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.875792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.875812 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.875824 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.979552 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.979613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.979625 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.979645 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:23 crc kubenswrapper[4801]: I1206 03:07:23.979661 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:23Z","lastTransitionTime":"2025-12-06T03:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.088597 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.089377 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.089417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.089445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.090038 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.194471 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.194540 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.194562 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.194593 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.194614 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.212015 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:24 crc kubenswrapper[4801]: E1206 03:07:24.212329 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.298314 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.298409 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.298445 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.298480 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.298504 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.402570 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.402632 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.402650 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.402726 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.402747 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.506792 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.506951 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.506973 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.507002 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.507057 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.610613 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.610689 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.610708 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.610739 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.610793 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.714207 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.714267 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.714290 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.714323 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.714343 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.817978 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.818062 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.818087 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.818125 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.818151 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.921576 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.921642 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.921657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.921687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:24 crc kubenswrapper[4801]: I1206 03:07:24.921703 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:24Z","lastTransitionTime":"2025-12-06T03:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.025009 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.025068 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.025080 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.025097 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.025111 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.128591 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.128655 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.128669 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.128687 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.128700 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.211536 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.211536 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:25 crc kubenswrapper[4801]: E1206 03:07:25.211693 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.211562 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:25 crc kubenswrapper[4801]: E1206 03:07:25.212410 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:25 crc kubenswrapper[4801]: E1206 03:07:25.212485 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.232121 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.232181 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.232194 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.232215 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.232233 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.335567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.335616 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.335626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.335643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.335655 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.438971 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.439022 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.439034 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.439051 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.439064 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.542245 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.542339 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.542372 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.542408 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.542434 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.646516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.646599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.646618 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.646644 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.646667 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.754547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.754603 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.754619 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.754643 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.754661 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.858310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.858376 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.858400 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.858427 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.858444 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.961140 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.961193 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.961214 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.961244 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:25 crc kubenswrapper[4801]: I1206 03:07:25.961268 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:25Z","lastTransitionTime":"2025-12-06T03:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.065310 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.065389 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.065413 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.065451 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.065475 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.169417 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.169499 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.169516 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.169543 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.169563 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.217909 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:26 crc kubenswrapper[4801]: E1206 03:07:26.218139 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.272301 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.272378 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.272399 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.272430 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.272452 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.376281 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.376328 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.376340 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.376355 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.376365 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.480211 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.480289 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.480306 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.480338 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.480359 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.583555 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.583608 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.583626 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.583648 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.583664 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.688582 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.688665 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.688683 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.688712 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.688730 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.791612 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.791657 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.791666 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.791681 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.791691 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.894472 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.894547 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.894567 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.894599 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.894619 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.997799 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.997867 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.997892 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.997923 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:26 crc kubenswrapper[4801]: I1206 03:07:26.997942 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:26Z","lastTransitionTime":"2025-12-06T03:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.101110 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.101222 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.101247 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.101279 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.101302 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:27Z","lastTransitionTime":"2025-12-06T03:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.204725 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.204825 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.204853 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.204886 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.204909 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:27Z","lastTransitionTime":"2025-12-06T03:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.212242 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.212314 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.212332 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:27 crc kubenswrapper[4801]: E1206 03:07:27.214311 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:27 crc kubenswrapper[4801]: E1206 03:07:27.214449 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:27 crc kubenswrapper[4801]: E1206 03:07:27.214650 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.282524 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.282614 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.282646 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.282893 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.282915 4801 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T03:07:27Z","lastTransitionTime":"2025-12-06T03:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.357983 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp"] Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.359248 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.363418 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.363585 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.363729 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.365102 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.500943 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f18cb7-c10f-466e-a840-2520394a2e3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.501013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.501116 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/08f18cb7-c10f-466e-a840-2520394a2e3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.501176 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f18cb7-c10f-466e-a840-2520394a2e3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.501234 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08f18cb7-c10f-466e-a840-2520394a2e3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f18cb7-c10f-466e-a840-2520394a2e3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603198 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603281 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f18cb7-c10f-466e-a840-2520394a2e3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603323 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603406 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.603438 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/08f18cb7-c10f-466e-a840-2520394a2e3e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.605190 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08f18cb7-c10f-466e-a840-2520394a2e3e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.613997 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f18cb7-c10f-466e-a840-2520394a2e3e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.641958 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08f18cb7-c10f-466e-a840-2520394a2e3e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nkfpp\" (UID: \"08f18cb7-c10f-466e-a840-2520394a2e3e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.693512 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" Dec 06 03:07:27 crc kubenswrapper[4801]: I1206 03:07:27.971696 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" event={"ID":"08f18cb7-c10f-466e-a840-2520394a2e3e","Type":"ContainerStarted","Data":"7b8c8c7d95b802cdb6af23d4c952d09d806e7557c59e4c0c0bf3fd9c795c834c"} Dec 06 03:07:28 crc kubenswrapper[4801]: I1206 03:07:28.212201 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:28 crc kubenswrapper[4801]: E1206 03:07:28.212439 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:29 crc kubenswrapper[4801]: I1206 03:07:29.032453 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:29 crc kubenswrapper[4801]: E1206 03:07:29.032687 4801 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:07:29 crc kubenswrapper[4801]: E1206 03:07:29.032851 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs podName:134354b0-1613-4536-aaf8-4e5ad12705f9 nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:33.03281476 +0000 UTC m=+166.155422372 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs") pod "network-metrics-daemon-wpnbx" (UID: "134354b0-1613-4536-aaf8-4e5ad12705f9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 03:07:29 crc kubenswrapper[4801]: I1206 03:07:29.212240 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:29 crc kubenswrapper[4801]: I1206 03:07:29.212363 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:29 crc kubenswrapper[4801]: E1206 03:07:29.212424 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:29 crc kubenswrapper[4801]: I1206 03:07:29.212556 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:29 crc kubenswrapper[4801]: E1206 03:07:29.212889 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:29 crc kubenswrapper[4801]: E1206 03:07:29.213069 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:29 crc kubenswrapper[4801]: I1206 03:07:29.984273 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" event={"ID":"08f18cb7-c10f-466e-a840-2520394a2e3e","Type":"ContainerStarted","Data":"89b2ce1071cec50aa1d708f22460126d5a61fab308fc501ae809f6ea336dbd46"} Dec 06 03:07:30 crc kubenswrapper[4801]: I1206 03:07:30.003294 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nkfpp" podStartSLOduration=81.003279882 podStartE2EDuration="1m21.003279882s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:30.002701726 +0000 UTC m=+103.125309348" watchObservedRunningTime="2025-12-06 03:07:30.003279882 +0000 UTC m=+103.125887454" Dec 06 03:07:30 crc kubenswrapper[4801]: I1206 03:07:30.211954 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:30 crc kubenswrapper[4801]: E1206 03:07:30.212117 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:31 crc kubenswrapper[4801]: I1206 03:07:31.211894 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:31 crc kubenswrapper[4801]: I1206 03:07:31.211923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:31 crc kubenswrapper[4801]: I1206 03:07:31.212040 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:31 crc kubenswrapper[4801]: E1206 03:07:31.212271 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:31 crc kubenswrapper[4801]: E1206 03:07:31.212645 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:31 crc kubenswrapper[4801]: E1206 03:07:31.212816 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:32 crc kubenswrapper[4801]: I1206 03:07:32.211686 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:32 crc kubenswrapper[4801]: E1206 03:07:32.211955 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:32 crc kubenswrapper[4801]: I1206 03:07:32.213186 4801 scope.go:117] "RemoveContainer" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" Dec 06 03:07:32 crc kubenswrapper[4801]: E1206 03:07:32.213570 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8qjvm_openshift-ovn-kubernetes(2cd76211-e203-4b5b-98b0-102d3d67315d)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" Dec 06 03:07:33 crc kubenswrapper[4801]: I1206 03:07:33.212327 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:33 crc kubenswrapper[4801]: I1206 03:07:33.212424 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:33 crc kubenswrapper[4801]: I1206 03:07:33.212366 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:33 crc kubenswrapper[4801]: E1206 03:07:33.212586 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:33 crc kubenswrapper[4801]: E1206 03:07:33.212788 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:33 crc kubenswrapper[4801]: E1206 03:07:33.212907 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:34 crc kubenswrapper[4801]: I1206 03:07:34.211896 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:34 crc kubenswrapper[4801]: E1206 03:07:34.212311 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:35 crc kubenswrapper[4801]: I1206 03:07:35.212288 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:35 crc kubenswrapper[4801]: I1206 03:07:35.212345 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:35 crc kubenswrapper[4801]: I1206 03:07:35.212309 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:35 crc kubenswrapper[4801]: E1206 03:07:35.212469 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:35 crc kubenswrapper[4801]: E1206 03:07:35.212592 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:35 crc kubenswrapper[4801]: E1206 03:07:35.212732 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:36 crc kubenswrapper[4801]: I1206 03:07:36.212004 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:36 crc kubenswrapper[4801]: E1206 03:07:36.212694 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:37 crc kubenswrapper[4801]: I1206 03:07:37.212010 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:37 crc kubenswrapper[4801]: I1206 03:07:37.212147 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:37 crc kubenswrapper[4801]: E1206 03:07:37.214095 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:37 crc kubenswrapper[4801]: I1206 03:07:37.214125 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:37 crc kubenswrapper[4801]: E1206 03:07:37.214306 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:37 crc kubenswrapper[4801]: E1206 03:07:37.214970 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:38 crc kubenswrapper[4801]: I1206 03:07:38.211817 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:38 crc kubenswrapper[4801]: E1206 03:07:38.212405 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:39 crc kubenswrapper[4801]: I1206 03:07:39.211980 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:39 crc kubenswrapper[4801]: I1206 03:07:39.212017 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:39 crc kubenswrapper[4801]: E1206 03:07:39.212206 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:39 crc kubenswrapper[4801]: I1206 03:07:39.212299 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:39 crc kubenswrapper[4801]: E1206 03:07:39.212330 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:39 crc kubenswrapper[4801]: E1206 03:07:39.212620 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:40 crc kubenswrapper[4801]: I1206 03:07:40.211548 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:40 crc kubenswrapper[4801]: E1206 03:07:40.211787 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:41 crc kubenswrapper[4801]: I1206 03:07:41.212367 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:41 crc kubenswrapper[4801]: I1206 03:07:41.212415 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:41 crc kubenswrapper[4801]: E1206 03:07:41.212612 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:41 crc kubenswrapper[4801]: I1206 03:07:41.212665 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:41 crc kubenswrapper[4801]: E1206 03:07:41.212942 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:41 crc kubenswrapper[4801]: E1206 03:07:41.213196 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:42 crc kubenswrapper[4801]: I1206 03:07:42.212083 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:42 crc kubenswrapper[4801]: E1206 03:07:42.212285 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:43 crc kubenswrapper[4801]: I1206 03:07:43.211601 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:43 crc kubenswrapper[4801]: I1206 03:07:43.211664 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:43 crc kubenswrapper[4801]: E1206 03:07:43.212444 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:43 crc kubenswrapper[4801]: I1206 03:07:43.211736 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:43 crc kubenswrapper[4801]: E1206 03:07:43.212718 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:43 crc kubenswrapper[4801]: E1206 03:07:43.213111 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.038950 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/1.log" Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.039693 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/0.log" Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.039830 4801 generic.go:334] "Generic (PLEG): container finished" podID="9695c5a7-610b-4c76-aa6f-b4f06f20823e" containerID="bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af" exitCode=1 Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.039898 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerDied","Data":"bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af"} Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.039990 4801 scope.go:117] "RemoveContainer" containerID="a4d6468beae094087d48bbe611064ed8c9c8429b0cc195f7921d5085192d2e33" Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.040698 4801 scope.go:117] "RemoveContainer" containerID="bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af" Dec 06 03:07:44 crc kubenswrapper[4801]: E1206 03:07:44.041048 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-4gxwt_openshift-multus(9695c5a7-610b-4c76-aa6f-b4f06f20823e)\"" pod="openshift-multus/multus-4gxwt" podUID="9695c5a7-610b-4c76-aa6f-b4f06f20823e" Dec 06 03:07:44 crc kubenswrapper[4801]: I1206 03:07:44.211703 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:44 crc kubenswrapper[4801]: E1206 03:07:44.212118 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:45 crc kubenswrapper[4801]: I1206 03:07:45.045082 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/1.log" Dec 06 03:07:45 crc kubenswrapper[4801]: I1206 03:07:45.212168 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:45 crc kubenswrapper[4801]: I1206 03:07:45.212316 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:45 crc kubenswrapper[4801]: I1206 03:07:45.212379 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:45 crc kubenswrapper[4801]: E1206 03:07:45.212376 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:45 crc kubenswrapper[4801]: E1206 03:07:45.212481 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:45 crc kubenswrapper[4801]: E1206 03:07:45.213102 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:46 crc kubenswrapper[4801]: I1206 03:07:46.211863 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:46 crc kubenswrapper[4801]: E1206 03:07:46.212563 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:46 crc kubenswrapper[4801]: I1206 03:07:46.212808 4801 scope.go:117] "RemoveContainer" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.054089 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/3.log" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.056557 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerStarted","Data":"c0d94a9b76a1f23733f58d1d54b46267232b326ffef3a6f088ad1070affb1cfa"} Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.057083 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.098698 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podStartSLOduration=98.098675152 podStartE2EDuration="1m38.098675152s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:07:47.09824376 +0000 UTC m=+120.220851332" watchObservedRunningTime="2025-12-06 03:07:47.098675152 +0000 UTC m=+120.221282724" Dec 06 03:07:47 crc kubenswrapper[4801]: E1206 03:07:47.174854 4801 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.212330 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.212350 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.212443 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:47 crc kubenswrapper[4801]: E1206 03:07:47.213657 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:47 crc kubenswrapper[4801]: E1206 03:07:47.213738 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:47 crc kubenswrapper[4801]: E1206 03:07:47.213818 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:47 crc kubenswrapper[4801]: E1206 03:07:47.292741 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 03:07:47 crc kubenswrapper[4801]: I1206 03:07:47.653967 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wpnbx"] Dec 06 03:07:48 crc kubenswrapper[4801]: I1206 03:07:48.059432 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:48 crc kubenswrapper[4801]: E1206 03:07:48.060018 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:48 crc kubenswrapper[4801]: I1206 03:07:48.211714 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:48 crc kubenswrapper[4801]: E1206 03:07:48.212125 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:49 crc kubenswrapper[4801]: I1206 03:07:49.211612 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:49 crc kubenswrapper[4801]: I1206 03:07:49.211913 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:49 crc kubenswrapper[4801]: E1206 03:07:49.212050 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:49 crc kubenswrapper[4801]: E1206 03:07:49.212251 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:50 crc kubenswrapper[4801]: I1206 03:07:50.211643 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:50 crc kubenswrapper[4801]: I1206 03:07:50.211703 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:50 crc kubenswrapper[4801]: E1206 03:07:50.212057 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:50 crc kubenswrapper[4801]: E1206 03:07:50.212477 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:51 crc kubenswrapper[4801]: I1206 03:07:51.211825 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:51 crc kubenswrapper[4801]: E1206 03:07:51.211954 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:51 crc kubenswrapper[4801]: I1206 03:07:51.211841 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:51 crc kubenswrapper[4801]: E1206 03:07:51.212252 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:52 crc kubenswrapper[4801]: I1206 03:07:52.212022 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:52 crc kubenswrapper[4801]: I1206 03:07:52.212078 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:52 crc kubenswrapper[4801]: E1206 03:07:52.212293 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:52 crc kubenswrapper[4801]: E1206 03:07:52.212459 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:52 crc kubenswrapper[4801]: E1206 03:07:52.294617 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 03:07:53 crc kubenswrapper[4801]: I1206 03:07:53.212562 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:53 crc kubenswrapper[4801]: E1206 03:07:53.212862 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:53 crc kubenswrapper[4801]: I1206 03:07:53.213127 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:53 crc kubenswrapper[4801]: E1206 03:07:53.213390 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:54 crc kubenswrapper[4801]: I1206 03:07:54.211356 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:54 crc kubenswrapper[4801]: I1206 03:07:54.211412 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:54 crc kubenswrapper[4801]: E1206 03:07:54.211587 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:54 crc kubenswrapper[4801]: E1206 03:07:54.211811 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:55 crc kubenswrapper[4801]: I1206 03:07:55.211437 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:55 crc kubenswrapper[4801]: E1206 03:07:55.211615 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:55 crc kubenswrapper[4801]: I1206 03:07:55.211644 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:55 crc kubenswrapper[4801]: E1206 03:07:55.211743 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:56 crc kubenswrapper[4801]: I1206 03:07:56.211349 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:56 crc kubenswrapper[4801]: I1206 03:07:56.211427 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:56 crc kubenswrapper[4801]: E1206 03:07:56.211667 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:56 crc kubenswrapper[4801]: E1206 03:07:56.211898 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:57 crc kubenswrapper[4801]: I1206 03:07:57.212176 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:57 crc kubenswrapper[4801]: I1206 03:07:57.212183 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:57 crc kubenswrapper[4801]: E1206 03:07:57.214573 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:07:57 crc kubenswrapper[4801]: E1206 03:07:57.214637 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:57 crc kubenswrapper[4801]: I1206 03:07:57.214921 4801 scope.go:117] "RemoveContainer" containerID="bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af" Dec 06 03:07:57 crc kubenswrapper[4801]: E1206 03:07:57.296354 4801 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 03:07:58 crc kubenswrapper[4801]: I1206 03:07:58.212287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:07:58 crc kubenswrapper[4801]: E1206 03:07:58.212528 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:07:58 crc kubenswrapper[4801]: I1206 03:07:58.213726 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:07:58 crc kubenswrapper[4801]: E1206 03:07:58.214145 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:07:59 crc kubenswrapper[4801]: I1206 03:07:59.108814 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/1.log" Dec 06 03:07:59 crc kubenswrapper[4801]: I1206 03:07:59.108884 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerStarted","Data":"e7ad082dc60aaf9e0b81f57ccc6e014f7624f603687c31db16438aa3fd0fb4a3"} Dec 06 03:07:59 crc kubenswrapper[4801]: I1206 03:07:59.211857 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:07:59 crc kubenswrapper[4801]: E1206 03:07:59.212034 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:07:59 crc kubenswrapper[4801]: I1206 03:07:59.212269 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:07:59 crc kubenswrapper[4801]: E1206 03:07:59.212329 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:08:00 crc kubenswrapper[4801]: I1206 03:08:00.212349 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:00 crc kubenswrapper[4801]: E1206 03:08:00.212493 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:08:00 crc kubenswrapper[4801]: I1206 03:08:00.212553 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:00 crc kubenswrapper[4801]: E1206 03:08:00.212666 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:08:01 crc kubenswrapper[4801]: I1206 03:08:01.212383 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:08:01 crc kubenswrapper[4801]: E1206 03:08:01.212531 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 03:08:01 crc kubenswrapper[4801]: I1206 03:08:01.212383 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:01 crc kubenswrapper[4801]: E1206 03:08:01.212797 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 03:08:02 crc kubenswrapper[4801]: I1206 03:08:02.211524 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:02 crc kubenswrapper[4801]: I1206 03:08:02.211524 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:02 crc kubenswrapper[4801]: E1206 03:08:02.211854 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-wpnbx" podUID="134354b0-1613-4536-aaf8-4e5ad12705f9" Dec 06 03:08:02 crc kubenswrapper[4801]: E1206 03:08:02.211953 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.211578 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.211546 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.217703 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.217817 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.217703 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 03:08:03 crc kubenswrapper[4801]: I1206 03:08:03.217947 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 03:08:04 crc kubenswrapper[4801]: I1206 03:08:04.212036 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:04 crc kubenswrapper[4801]: I1206 03:08:04.212157 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:04 crc kubenswrapper[4801]: I1206 03:08:04.214884 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 03:08:04 crc kubenswrapper[4801]: I1206 03:08:04.216067 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.198282 4801 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.259723 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsvsf"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.260802 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.262514 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.263243 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.265203 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.266055 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.266949 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gcgft"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.267494 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.267713 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.267833 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.268135 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.268816 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.269714 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.271898 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cqsjn"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.272420 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.274046 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8b96"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.274801 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.275636 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmsxh"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.277287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.278291 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.279192 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.280339 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.280968 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.287106 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.287149 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.288011 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.292921 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.293983 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.296803 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.298286 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.298501 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.302029 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.302315 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.309128 4801 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.309419 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.309581 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.309895 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.309959 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311061 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311178 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311288 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311418 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311795 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.311915 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.313148 4801 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.313325 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.313466 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.313646 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314548 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-images\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314620 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-node-pullsecrets\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc 
kubenswrapper[4801]: I1206 03:08:08.314695 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdq6\" (UniqueName: \"kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314729 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314769 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314809 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-audit-dir\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-serving-cert\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314880 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314914 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314913 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314949 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-dir\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.314989 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-audit\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315021 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-machine-approver-tls\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315084 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-policies\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315109 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-encryption-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315134 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j67\" (UniqueName: \"kubernetes.io/projected/d5e2010c-d755-4f50-b5de-799ab1c30e5a-kube-api-access-94j67\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315163 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpqls\" (UniqueName: \"kubernetes.io/projected/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-kube-api-access-dpqls\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315191 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315216 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxsh6\" (UniqueName: \"kubernetes.io/projected/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-kube-api-access-kxsh6\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315244 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315271 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315315 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315341 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-config\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315364 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-client\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315389 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-etcd-client\") pod \"apiserver-76f77b778f-gcgft\" (UID: 
\"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315413 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315438 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp7w6\" (UniqueName: \"kubernetes.io/projected/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-kube-api-access-mp7w6\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315468 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5e2010c-d755-4f50-b5de-799ab1c30e5a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315492 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315519 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c50977b-ea29-4832-927a-64352613ccd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315543 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-auth-proxy-config\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315598 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315626 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-encryption-config\") pod 
\"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315653 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c50977b-ea29-4832-927a-64352613ccd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-image-import-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315705 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315780 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kn9p\" (UniqueName: \"kubernetes.io/projected/1c50977b-ea29-4832-927a-64352613ccd9-kube-api-access-7kn9p\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315810 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315834 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92z67\" (UniqueName: \"kubernetes.io/projected/70437be2-9089-427f-8daa-22a299ed14b8-kube-api-access-92z67\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315883 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315905 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-config\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315926 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315951 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-serving-cert\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315974 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.315999 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.316037 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-console-operator/console-operator-58897d9998-f492b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.316661 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-l87sx"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.316716 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321037 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321293 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321490 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321542 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.313471 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321677 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321854 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321883 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.321890 4801 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.322277 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.322508 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.322628 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.322538 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.322881 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323130 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323133 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323297 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323474 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323483 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323856 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323913 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.323999 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324119 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324244 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324322 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 
06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324423 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324510 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324515 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324054 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324289 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324725 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324745 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324843 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324898 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324977 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.325005 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324251 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.325050 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.324048 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.325207 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.325329 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.325403 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.327891 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.328624 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.335413 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.336211 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.336677 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.336967 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.340493 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.340741 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.341142 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.342334 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.347485 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.354075 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7679s"] 
Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.354157 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.354403 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.355023 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.355777 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.356387 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.356487 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.374853 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.378216 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.382921 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.385041 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 03:08:08 
crc kubenswrapper[4801]: I1206 03:08:08.385585 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.388773 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d87wj"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.389765 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.390774 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.391043 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.393194 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.395048 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.395857 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k47rq"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.398432 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.396034 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.398817 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.397511 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.396259 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.397158 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.397278 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.399261 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.397343 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.399348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.400409 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 
03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.400587 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.400747 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.401121 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.402684 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.403037 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.403437 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.403726 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.403939 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.404293 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.407022 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.407197 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.407339 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.407541 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.408901 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.409647 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.410204 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.410521 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.410988 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.411487 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.412683 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.412826 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.412907 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.413384 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.413675 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bb2lf"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.413959 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.414781 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.415021 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.416260 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417236 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417248 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417310 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-config\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417331 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-client\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417349 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-etcd-client\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417367 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp7w6\" (UniqueName: \"kubernetes.io/projected/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-kube-api-access-mp7w6\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417391 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0a30cb-3dee-44de-a8c3-affda5cb644a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417413 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417438 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5e2010c-d755-4f50-b5de-799ab1c30e5a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc 
kubenswrapper[4801]: I1206 03:08:08.417457 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c50977b-ea29-4832-927a-64352613ccd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-auth-proxy-config\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417514 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c50977b-ea29-4832-927a-64352613ccd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: 
\"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jlvh\" (UniqueName: \"kubernetes.io/projected/3a0a30cb-3dee-44de-a8c3-affda5cb644a-kube-api-access-8jlvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417586 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-encryption-config\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-image-import-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417620 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417653 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kn9p\" (UniqueName: 
\"kubernetes.io/projected/1c50977b-ea29-4832-927a-64352613ccd9-kube-api-access-7kn9p\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417671 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417688 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92z67\" (UniqueName: \"kubernetes.io/projected/70437be2-9089-427f-8daa-22a299ed14b8-kube-api-access-92z67\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417708 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417730 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417745 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-config\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417777 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-config\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417813 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417832 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0a30cb-3dee-44de-a8c3-affda5cb644a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417851 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-serving-cert\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417872 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-images\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417911 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-node-pullsecrets\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417932 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417947 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdq6\" (UniqueName: \"kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417964 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.417991 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-audit-dir\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-serving-cert\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418025 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418042 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418061 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mbxg\" (UniqueName: \"kubernetes.io/projected/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-kube-api-access-4mbxg\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418080 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-dir\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418096 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: 
\"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418113 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-audit\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418130 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418146 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-machine-approver-tls\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418162 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-encryption-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418189 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-policies\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418205 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-serving-cert\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94j67\" (UniqueName: \"kubernetes.io/projected/d5e2010c-d755-4f50-b5de-799ab1c30e5a-kube-api-access-94j67\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418240 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpqls\" (UniqueName: \"kubernetes.io/projected/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-kube-api-access-dpqls\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 
06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418295 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxsh6\" (UniqueName: \"kubernetes.io/projected/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-kube-api-access-kxsh6\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418313 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.418337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.419692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-image-import-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.420110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.422213 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.424618 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.425333 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2lngl"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.425904 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426113 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426183 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426290 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-node-pullsecrets\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426418 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426532 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-dir\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426618 4801 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426850 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.426928 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c50977b-ea29-4832-927a-64352613ccd9-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.427070 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-audit\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.427660 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-etcd-serving-ca\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.442074 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-encryption-config\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.442192 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c50977b-ea29-4832-927a-64352613ccd9-config\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.442609 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.442889 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.443249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-serving-cert\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 
03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.443542 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.444170 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70437be2-9089-427f-8daa-22a299ed14b8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.444312 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.444651 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.445267 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-etcd-client\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.447393 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.447969 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.448502 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.448994 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.449509 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-images\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.449924 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5e2010c-d755-4f50-b5de-799ab1c30e5a-config\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.450005 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.450451 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.450511 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsvsf"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.450635 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/70437be2-9089-427f-8daa-22a299ed14b8-audit-dir\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.451045 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5e2010c-d755-4f50-b5de-799ab1c30e5a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.452001 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-encryption-config\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.452711 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: 
\"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.454210 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.456077 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.457692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-audit-policies\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.457729 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.459327 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/70437be2-9089-427f-8daa-22a299ed14b8-serving-cert\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.460421 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.461398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.461468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.461480 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-etcd-client\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.461742 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-config\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.461975 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-auth-proxy-config\") pod 
\"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.463131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.464253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-serving-cert\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.464382 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.466953 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.467569 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.468610 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zst5x"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.469671 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.469936 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.473092 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cqsjn"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.473157 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d87wj"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.473172 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d79m7"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.474431 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.479231 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.479884 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-machine-approver-tls\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.479957 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.480599 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.483615 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.483828 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmsxh"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.485898 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.489743 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8b96"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.495953 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.497627 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.499071 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f492b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.500469 
4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.501855 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l87sx"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.502931 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.508337 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.510952 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.512240 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.513317 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gcgft"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.514329 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.515589 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.516675 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.517763 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.518780 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.518976 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mbxg\" (UniqueName: \"kubernetes.io/projected/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-kube-api-access-4mbxg\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519013 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-serving-cert\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519113 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519146 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0a30cb-3dee-44de-a8c3-affda5cb644a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519177 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jlvh\" (UniqueName: \"kubernetes.io/projected/3a0a30cb-3dee-44de-a8c3-affda5cb644a-kube-api-access-8jlvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519238 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-config\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0a30cb-3dee-44de-a8c3-affda5cb644a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.519853 4801 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.520256 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-service-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.520281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-config\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.520658 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.522346 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.522637 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a0a30cb-3dee-44de-a8c3-affda5cb644a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.522891 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a0a30cb-3dee-44de-a8c3-affda5cb644a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.523492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-serving-cert\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.523832 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.524265 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.524708 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-fm29j"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.526256 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.526448 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.526866 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bb2lf"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.528241 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-psxlr"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.529173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.529266 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.530267 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2lngl"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.531306 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7679s"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.533138 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d79m7"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.534235 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.535653 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.537089 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zst5x"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.538094 4801 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.539238 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-psxlr"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.540619 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.542530 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-b46nz"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.543422 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.543628 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b46nz"] Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.543812 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.563332 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.588071 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.603947 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.625475 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.643591 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.663662 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.682410 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.744013 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.764700 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.784139 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.803375 4801 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.823698 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.844259 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.864257 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.885146 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.904560 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.924435 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.943385 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.964392 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 03:08:08 crc kubenswrapper[4801]: I1206 03:08:08.983319 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.004053 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 
06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.024035 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.043869 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.063408 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.084886 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.104710 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.123607 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.151923 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.164316 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.184307 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.204873 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: 
I1206 03:08:09.224660 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.244623 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.264040 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.283446 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.304682 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.324196 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.343796 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.363737 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.384014 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.403338 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.422496 4801 request.go:700] Waited for 1.010429376s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.424529 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.443033 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.464194 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.483487 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.503632 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.533229 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.544602 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.564115 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 03:08:09 crc 
kubenswrapper[4801]: I1206 03:08:09.583837 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.603804 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.623524 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.643425 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.685663 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kn9p\" (UniqueName: \"kubernetes.io/projected/1c50977b-ea29-4832-927a-64352613ccd9-kube-api-access-7kn9p\") pod \"openshift-apiserver-operator-796bbdcf4f-dqs5h\" (UID: \"1c50977b-ea29-4832-927a-64352613ccd9\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.700199 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92z67\" (UniqueName: \"kubernetes.io/projected/70437be2-9089-427f-8daa-22a299ed14b8-kube-api-access-92z67\") pod \"apiserver-76f77b778f-gcgft\" (UID: \"70437be2-9089-427f-8daa-22a299ed14b8\") " pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.724103 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.724299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp7w6\" (UniqueName: 
\"kubernetes.io/projected/a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d-kube-api-access-mp7w6\") pod \"apiserver-7bbb656c7d-x65wm\" (UID: \"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.743343 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.762850 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.783991 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.804675 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.824539 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.842285 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.844718 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.880026 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.889018 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94j67\" (UniqueName: \"kubernetes.io/projected/d5e2010c-d755-4f50-b5de-799ab1c30e5a-kube-api-access-94j67\") pod \"machine-api-operator-5694c8668f-zsvsf\" (UID: \"d5e2010c-d755-4f50-b5de-799ab1c30e5a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.904681 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.905907 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpqls\" (UniqueName: \"kubernetes.io/projected/d58c5185-9cfb-4e5f-956e-d12e12b5e81e-kube-api-access-dpqls\") pod \"openshift-config-operator-7777fb866f-p8b96\" (UID: \"d58c5185-9cfb-4e5f-956e-d12e12b5e81e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.927318 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.929149 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxsh6\" (UniqueName: \"kubernetes.io/projected/76b3d36e-5cdb-40d7-b0e9-34e712c61d13-kube-api-access-kxsh6\") pod \"machine-approver-56656f9798-k54lb\" (UID: \"76b3d36e-5cdb-40d7-b0e9-34e712c61d13\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.944828 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 
03:08:09.964198 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.978951 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:09 crc kubenswrapper[4801]: I1206 03:08:09.983577 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.026014 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.031423 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdq6\" (UniqueName: \"kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6\") pod \"oauth-openshift-558db77b4-cqsjn\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.045046 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.064791 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.083115 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.095955 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.109717 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.110480 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-gcgft"] Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.124058 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.128411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.142355 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h"] Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.143741 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 03:08:10 crc kubenswrapper[4801]: W1206 03:08:10.153923 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c50977b_ea29_4832_927a_64352613ccd9.slice/crio-de4f4ea0606c7534a2f6c0179db442c773192de1d8d9c48b8c0d883d19ed5895 WatchSource:0}: Error finding container de4f4ea0606c7534a2f6c0179db442c773192de1d8d9c48b8c0d883d19ed5895: Status 404 returned error can't find the container with id de4f4ea0606c7534a2f6c0179db442c773192de1d8d9c48b8c0d883d19ed5895 Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.164123 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 03:08:10 crc 
kubenswrapper[4801]: I1206 03:08:10.185529 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm"] Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.188088 4801 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.192737 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" event={"ID":"1c50977b-ea29-4832-927a-64352613ccd9","Type":"ContainerStarted","Data":"de4f4ea0606c7534a2f6c0179db442c773192de1d8d9c48b8c0d883d19ed5895"} Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.197441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" event={"ID":"76b3d36e-5cdb-40d7-b0e9-34e712c61d13","Type":"ContainerStarted","Data":"cb60ed4199c5c9cd99d2a32a7e1916469083476885579c97d37e2bcc1d2a9347"} Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.206419 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.207040 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" event={"ID":"70437be2-9089-427f-8daa-22a299ed14b8","Type":"ContainerStarted","Data":"d977b710a6c42804bc8241beffe83c69c28cf35287cb7582940db7eb0817a42a"} Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.217359 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:10 crc kubenswrapper[4801]: W1206 03:08:10.218199 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ad9cbd_c157_4563_9c49_2b2e8dc9a13d.slice/crio-4b1e91525d27f881af7e9b97e0a3c83606992d7dc5d84b61e8ca7f0bf9009526 WatchSource:0}: Error finding container 4b1e91525d27f881af7e9b97e0a3c83606992d7dc5d84b61e8ca7f0bf9009526: Status 404 returned error can't find the container with id 4b1e91525d27f881af7e9b97e0a3c83606992d7dc5d84b61e8ca7f0bf9009526 Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.223660 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.245303 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.248580 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-p8b96"] Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.282576 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mbxg\" (UniqueName: \"kubernetes.io/projected/c3916e25-63a1-4aac-a9c6-75a5c6d4ee51-kube-api-access-4mbxg\") pod \"authentication-operator-69f744f599-xmsxh\" (UID: \"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.298280 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.303280 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.305976 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jlvh\" (UniqueName: \"kubernetes.io/projected/3a0a30cb-3dee-44de-a8c3-affda5cb644a-kube-api-access-8jlvh\") pod \"openshift-controller-manager-operator-756b6f6bc6-7pjv6\" (UID: \"3a0a30cb-3dee-44de-a8c3-affda5cb644a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.325037 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.343729 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.363278 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.383108 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.390338 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cqsjn"] Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.400139 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.403822 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.422675 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.441560 4801 request.go:700] Waited for 1.897397565s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.443680 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.464395 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.484041 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 03:08:10 crc kubenswrapper[4801]: I1206 03:08:10.515649 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zsvsf"] Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.891140 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.891259 4801 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910521 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhq9l\" (UniqueName: \"kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910606 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910629 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910662 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2d5z\" (UniqueName: \"kubernetes.io/projected/ac114e18-3e28-463f-ad3c-38ae077fdac1-kube-api-access-l2d5z\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910701 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910717 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910803 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-service-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910836 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ac114e18-3e28-463f-ad3c-38ae077fdac1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910856 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-serving-cert\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910893 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vb7d\" (UniqueName: \"kubernetes.io/projected/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-kube-api-access-9vb7d\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910946 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96wqb\" (UID: 
\"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910976 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7fdd24fe-710e-4452-a48a-1d59910c78e3-metrics-tls\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.910993 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911093 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hvlnk\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911126 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-config\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911146 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmmmc\" (UniqueName: \"kubernetes.io/projected/e3827827-d4d4-4506-8318-6867da12c067-kube-api-access-zmmmc\") pod \"downloads-7954f5f757-l87sx\" (UID: \"e3827827-d4d4-4506-8318-6867da12c067\") " pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911200 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f37838-e5ab-461e-833e-d07b0bf13cf3-serving-cert\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911218 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-client\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:12 crc 
kubenswrapper[4801]: I1206 03:08:12.911247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nw4n\" (UniqueName: \"kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911288 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911306 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911336 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-trusted-ca\") pod 
\"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911353 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-config\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911373 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911393 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911416 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr677\" (UniqueName: \"kubernetes.io/projected/7fdd24fe-710e-4452-a48a-1d59910c78e3-kube-api-access-jr677\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911438 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.911502 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.912459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.912489 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrgm\" (UniqueName: \"kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: 
\"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.912512 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txmkr\" (UniqueName: \"kubernetes.io/projected/c5f37838-e5ab-461e-833e-d07b0bf13cf3-kube-api-access-txmkr\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.912549 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.912577 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:12 crc kubenswrapper[4801]: E1206 03:08:12.914305 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.414286224 +0000 UTC m=+146.536893786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.924873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.925017 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:12 crc kubenswrapper[4801]: W1206 03:08:12.952226 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9771c2_4f3e_4c26_ad26_fa67911f1169.slice/crio-4932f68c1ccbb0f462a90b3b69768102bb12b525560e94dc089a84aa99c9fefa WatchSource:0}: Error finding container 4932f68c1ccbb0f462a90b3b69768102bb12b525560e94dc089a84aa99c9fefa: Status 404 returned error can't find the container with id 4932f68c1ccbb0f462a90b3b69768102bb12b525560e94dc089a84aa99c9fefa Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.961582 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" event={"ID":"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d","Type":"ContainerStarted","Data":"4b1e91525d27f881af7e9b97e0a3c83606992d7dc5d84b61e8ca7f0bf9009526"} Dec 06 03:08:12 crc kubenswrapper[4801]: I1206 03:08:12.961622 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" event={"ID":"d58c5185-9cfb-4e5f-956e-d12e12b5e81e","Type":"ContainerStarted","Data":"09a4a9c3f655b2ceb62f8835b72959b688fb1e4e7eb21fef95e36b0b12312fdb"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.033269 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.033649 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.533628089 +0000 UTC m=+146.656235661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034016 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fefbdaaf-1e48-4731-93e6-285fff94b582-cert\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034039 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-profile-collector-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034078 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034095 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvlnk\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk\") pod 
\"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034112 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4phd\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-kube-api-access-t4phd\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvvm\" (UniqueName: \"kubernetes.io/projected/cdeebf50-e2da-438a-b872-64c4a8d43d6e-kube-api-access-npvvm\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034159 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbvp\" (UniqueName: \"kubernetes.io/projected/a0904103-6105-41fd-b158-2f8a5a99b773-kube-api-access-7kbvp\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-webhook-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034201 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwzr\" (UniqueName: \"kubernetes.io/projected/c4257778-cdcb-4430-beb7-a47766082129-kube-api-access-2hwzr\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034221 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c483458-0e51-4a45-86bc-df13cc609b9d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034239 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-trusted-ca\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034256 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-config\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034275 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32975314-a63c-4c90-a5f6-6bee14a860c8-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034302 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034318 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034334 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-srv-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034357 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr677\" (UniqueName: \"kubernetes.io/projected/7fdd24fe-710e-4452-a48a-1d59910c78e3-kube-api-access-jr677\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034374 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc5z\" (UniqueName: \"kubernetes.io/projected/6c18f03b-59b4-4759-ae52-198497bc084d-kube-api-access-hfc5z\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034414 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqts\" (UniqueName: \"kubernetes.io/projected/929cf2ac-1dab-4c49-89ac-243e45f24493-kube-api-access-4vqts\") pod \"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034514 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjg4b\" (UniqueName: \"kubernetes.io/projected/fefbdaaf-1e48-4731-93e6-285fff94b582-kube-api-access-wjg4b\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034576 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bc4223-423a-4dcf-9338-a2bc95e91234-config\") pod 
\"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034603 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-stats-auth\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034640 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hks8\" (UniqueName: \"kubernetes.io/projected/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-kube-api-access-6hks8\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e80f1b1d-bd4e-4890-88eb-daf951411754-metrics-tls\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034685 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb7h\" (UniqueName: \"kubernetes.io/projected/07bc4223-423a-4dcf-9338-a2bc95e91234-kube-api-access-dsb7h\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.034730 
4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhq9l\" (UniqueName: \"kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036454 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8jm\" (UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-kube-api-access-mj8jm\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036749 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036828 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config\") pod \"console-f9d7485db-qnr4c\" 
(UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036852 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8cd59dba-6eb2-498f-b659-f4710a2da4b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036874 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b40f713-c236-4a45-8368-be3bb94cd428-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036975 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.036994 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ac114e18-3e28-463f-ad3c-38ae077fdac1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037010 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037030 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1266164c-6204-478a-9d2b-7f4a54cd42fa-config\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037049 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037070 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037131 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1266164c-6204-478a-9d2b-7f4a54cd42fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037205 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-images\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037231 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0904103-6105-41fd-b158-2f8a5a99b773-config-volume\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037277 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037331 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5np54\" (UniqueName: \"kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037355 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-registration-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037384 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037445 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-config\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 
03:08:13.037505 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n9fg\" (UniqueName: \"kubernetes.io/projected/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-kube-api-access-2n9fg\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037557 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037617 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-config\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-trusted-ca\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/929cf2ac-1dab-4c49-89ac-243e45f24493-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037697 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-cabundle\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.037721 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmmmc\" (UniqueName: \"kubernetes.io/projected/e3827827-d4d4-4506-8318-6867da12c067-kube-api-access-zmmmc\") pod \"downloads-7954f5f757-l87sx\" (UID: \"e3827827-d4d4-4506-8318-6867da12c067\") " pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.038269 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.038415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.039357 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.039569 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.040101 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.040108 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f37838-e5ab-461e-833e-d07b0bf13cf3-config\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.043258 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-config\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.046496 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/ac114e18-3e28-463f-ad3c-38ae077fdac1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.048736 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.049075 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.058522 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.059960 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-metrics-certs\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: 
I1206 03:08:13.060039 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f37838-e5ab-461e-833e-d07b0bf13cf3-serving-cert\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060070 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-client\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060147 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060194 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b40f713-c236-4a45-8368-be3bb94cd428-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060230 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060256 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nw4n\" (UniqueName: \"kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060279 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80f1b1d-bd4e-4890-88eb-daf951411754-trusted-ca\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060306 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060332 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.060358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a0904103-6105-41fd-b158-2f8a5a99b773-metrics-tls\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.062844 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.562825211 +0000 UTC m=+146.685432783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063512 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrtd\" (UniqueName: \"kubernetes.io/projected/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-kube-api-access-dlrtd\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063588 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063610 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063628 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-default-certificate\") pod 
\"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063695 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063714 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-key\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063731 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1266164c-6204-478a-9d2b-7f4a54cd42fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063748 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063783 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063799 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw9vv\" (UniqueName: \"kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063830 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b40f713-c236-4a45-8368-be3bb94cd428-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063859 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-csi-data-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063880 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-proxy-tls\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063896 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnrgm\" (UniqueName: \"kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063913 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txmkr\" (UniqueName: \"kubernetes.io/projected/c5f37838-e5ab-461e-833e-d07b0bf13cf3-kube-api-access-txmkr\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063928 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-srv-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063971 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.063989 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064006 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7krd\" (UniqueName: \"kubernetes.io/projected/4c483458-0e51-4a45-86bc-df13cc609b9d-kube-api-access-w7krd\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064027 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbw84\" (UniqueName: \"kubernetes.io/projected/974d36e1-ff64-4ad8-9bd9-0efef426c97d-kube-api-access-gbw84\") pod \"migrator-59844c95c7-rhb7b\" (UID: \"974d36e1-ff64-4ad8-9bd9-0efef426c97d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064059 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgn5\" (UniqueName: \"kubernetes.io/projected/2904c307-27ef-43a1-8913-a24e9ad16aa0-kube-api-access-rlgn5\") pod \"machine-config-server-fm29j\" (UID: 
\"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064076 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bc4223-423a-4dcf-9338-a2bc95e91234-serving-cert\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064129 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32975314-a63c-4c90-a5f6-6bee14a860c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064148 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-plugins-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064162 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064178 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064200 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6p72\" (UniqueName: \"kubernetes.io/projected/8cd59dba-6eb2-498f-b659-f4710a2da4b4-kube-api-access-h6p72\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064220 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-mountpoint-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064234 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fkd\" (UniqueName: \"kubernetes.io/projected/c2182271-6931-405d-b230-a47f12606828-kube-api-access-k2fkd\") pod \"olm-operator-6b444d44fb-gckck\" (UID: 
\"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064259 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-socket-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064287 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064307 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cdeebf50-e2da-438a-b872-64c4a8d43d6e-tmpfs\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064325 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4bt\" (UniqueName: \"kubernetes.io/projected/985b208d-91e2-4e10-b919-0ef77ba89163-kube-api-access-kw4bt\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064346 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2d5z\" (UniqueName: 
\"kubernetes.io/projected/ac114e18-3e28-463f-ad3c-38ae077fdac1-kube-api-access-l2d5z\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr677\" (UniqueName: \"kubernetes.io/projected/7fdd24fe-710e-4452-a48a-1d59910c78e3-kube-api-access-jr677\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064364 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4257778-cdcb-4430-beb7-a47766082129-proxy-tls\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064380 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-node-bootstrap-token\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064399 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc 
kubenswrapper[4801]: I1206 03:08:13.064416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-service-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064433 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/985b208d-91e2-4e10-b919-0ef77ba89163-service-ca-bundle\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064452 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-serving-cert\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064548 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064627 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhq9l\" (UniqueName: \"kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-certs\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vb7d\" (UniqueName: \"kubernetes.io/projected/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-kube-api-access-9vb7d\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064793 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmskt\" (UniqueName: \"kubernetes.io/projected/80a374f1-02f7-4092-8027-e1967bf9190f-kube-api-access-fmskt\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064821 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7fdd24fe-710e-4452-a48a-1d59910c78e3-metrics-tls\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064837 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.064852 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.066069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmmmc\" (UniqueName: \"kubernetes.io/projected/e3827827-d4d4-4506-8318-6867da12c067-kube-api-access-zmmmc\") pod \"downloads-7954f5f757-l87sx\" (UID: \"e3827827-d4d4-4506-8318-6867da12c067\") " pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.066191 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-client\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.066639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.067459 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.069675 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.071144 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/80a374f1-02f7-4092-8027-e1967bf9190f-etcd-service-ca\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.072189 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/80a374f1-02f7-4092-8027-e1967bf9190f-serving-cert\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.072366 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 
03:08:13.072626 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.073320 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.073464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7fdd24fe-710e-4452-a48a-1d59910c78e3-metrics-tls\") pod \"dns-operator-744455d44c-7679s\" (UID: \"7fdd24fe-710e-4452-a48a-1d59910c78e3\") " pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.073639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f37838-e5ab-461e-833e-d07b0bf13cf3-serving-cert\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.073645 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.074299 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.075935 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.076123 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvlnk\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.085011 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert\") pod \"controller-manager-879f6c89f-zscxm\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.085942 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 
crc kubenswrapper[4801]: I1206 03:08:13.088771 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.089030 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.090030 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vb7d\" (UniqueName: \"kubernetes.io/projected/8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59-kube-api-access-9vb7d\") pod \"kube-storage-version-migrator-operator-b67b599dd-dwczw\" (UID: \"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.092233 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txmkr\" (UniqueName: \"kubernetes.io/projected/c5f37838-e5ab-461e-833e-d07b0bf13cf3-kube-api-access-txmkr\") pod \"console-operator-58897d9998-f492b\" (UID: \"c5f37838-e5ab-461e-833e-d07b0bf13cf3\") " pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.093623 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nw4n\" (UniqueName: \"kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n\") pod \"console-f9d7485db-qnr4c\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.093725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-lnrgm\" (UniqueName: \"kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm\") pod \"route-controller-manager-6576b87f9c-p5m6h\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.093743 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2d5z\" (UniqueName: \"kubernetes.io/projected/ac114e18-3e28-463f-ad3c-38ae077fdac1-kube-api-access-l2d5z\") pod \"cluster-samples-operator-665b6dd947-pz6qq\" (UID: \"ac114e18-3e28-463f-ad3c-38ae077fdac1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.121708 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.165902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.166063 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-key\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167266 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1266164c-6204-478a-9d2b-7f4a54cd42fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167301 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167322 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw9vv\" (UniqueName: \"kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.167392 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.667358019 +0000 UTC m=+146.789965591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167462 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b40f713-c236-4a45-8368-be3bb94cd428-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167490 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-csi-data-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-proxy-tls\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167528 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-srv-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: 
\"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7krd\" (UniqueName: \"kubernetes.io/projected/4c483458-0e51-4a45-86bc-df13cc609b9d-kube-api-access-w7krd\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167572 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167587 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlgn5\" (UniqueName: \"kubernetes.io/projected/2904c307-27ef-43a1-8913-a24e9ad16aa0-kube-api-access-rlgn5\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bc4223-423a-4dcf-9338-a2bc95e91234-serving-cert\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167626 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gbw84\" (UniqueName: \"kubernetes.io/projected/974d36e1-ff64-4ad8-9bd9-0efef426c97d-kube-api-access-gbw84\") pod \"migrator-59844c95c7-rhb7b\" (UID: \"974d36e1-ff64-4ad8-9bd9-0efef426c97d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167641 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-plugins-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167658 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167681 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167713 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32975314-a63c-4c90-a5f6-6bee14a860c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 
06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6p72\" (UniqueName: \"kubernetes.io/projected/8cd59dba-6eb2-498f-b659-f4710a2da4b4-kube-api-access-h6p72\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167747 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-mountpoint-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167779 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2fkd\" (UniqueName: \"kubernetes.io/projected/c2182271-6931-405d-b230-a47f12606828-kube-api-access-k2fkd\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167801 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-socket-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167816 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw4bt\" (UniqueName: \"kubernetes.io/projected/985b208d-91e2-4e10-b919-0ef77ba89163-kube-api-access-kw4bt\") pod \"router-default-5444994796-k47rq\" (UID: 
\"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167836 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/cdeebf50-e2da-438a-b872-64c4a8d43d6e-tmpfs\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167856 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4257778-cdcb-4430-beb7-a47766082129-proxy-tls\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167875 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-node-bootstrap-token\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167893 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/985b208d-91e2-4e10-b919-0ef77ba89163-service-ca-bundle\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167927 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-certs\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167945 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmskt\" (UniqueName: \"kubernetes.io/projected/80a374f1-02f7-4092-8027-e1967bf9190f-kube-api-access-fmskt\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167959 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.167992 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fefbdaaf-1e48-4731-93e6-285fff94b582-cert\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168008 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-profile-collector-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168028 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4phd\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-kube-api-access-t4phd\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168049 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvvm\" (UniqueName: \"kubernetes.io/projected/cdeebf50-e2da-438a-b872-64c4a8d43d6e-kube-api-access-npvvm\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168064 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kbvp\" (UniqueName: \"kubernetes.io/projected/a0904103-6105-41fd-b158-2f8a5a99b773-kube-api-access-7kbvp\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 
crc kubenswrapper[4801]: I1206 03:08:13.168081 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-webhook-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168101 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwzr\" (UniqueName: \"kubernetes.io/projected/c4257778-cdcb-4430-beb7-a47766082129-kube-api-access-2hwzr\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168120 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c483458-0e51-4a45-86bc-df13cc609b9d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32975314-a63c-4c90-a5f6-6bee14a860c8-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168159 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168173 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-srv-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168194 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc5z\" (UniqueName: \"kubernetes.io/projected/6c18f03b-59b4-4759-ae52-198497bc084d-kube-api-access-hfc5z\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168219 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqts\" (UniqueName: \"kubernetes.io/projected/929cf2ac-1dab-4c49-89ac-243e45f24493-kube-api-access-4vqts\") pod \"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168244 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjg4b\" (UniqueName: \"kubernetes.io/projected/fefbdaaf-1e48-4731-93e6-285fff94b582-kube-api-access-wjg4b\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 
03:08:13.168258 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07bc4223-423a-4dcf-9338-a2bc95e91234-config\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168274 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-stats-auth\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168291 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e80f1b1d-bd4e-4890-88eb-daf951411754-metrics-tls\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168309 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb7h\" (UniqueName: \"kubernetes.io/projected/07bc4223-423a-4dcf-9338-a2bc95e91234-kube-api-access-dsb7h\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168329 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hks8\" (UniqueName: \"kubernetes.io/projected/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-kube-api-access-6hks8\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168350 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8jm\" (UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-kube-api-access-mj8jm\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168365 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8cd59dba-6eb2-498f-b659-f4710a2da4b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168402 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b40f713-c236-4a45-8368-be3bb94cd428-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168422 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168439 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1266164c-6204-478a-9d2b-7f4a54cd42fa-config\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168471 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1266164c-6204-478a-9d2b-7f4a54cd42fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-images\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0904103-6105-41fd-b158-2f8a5a99b773-config-volume\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168528 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5np54\" (UniqueName: \"kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168543 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-registration-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168558 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-config\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168578 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168595 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n9fg\" (UniqueName: \"kubernetes.io/projected/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-kube-api-access-2n9fg\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168609 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-cabundle\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168644 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/929cf2ac-1dab-4c49-89ac-243e45f24493-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168660 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-metrics-certs\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168675 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b40f713-c236-4a45-8368-be3bb94cd428-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168708 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80f1b1d-bd4e-4890-88eb-daf951411754-trusted-ca\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168742 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168778 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a0904103-6105-41fd-b158-2f8a5a99b773-metrics-tls\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168817 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrtd\" (UniqueName: \"kubernetes.io/projected/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-kube-api-access-dlrtd\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.168835 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-default-certificate\") pod 
\"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.170403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.171033 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-csi-data-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.172075 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.172890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b40f713-c236-4a45-8368-be3bb94cd428-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.173492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a0904103-6105-41fd-b158-2f8a5a99b773-config-volume\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.175402 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.67538496 +0000 UTC m=+146.797992612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.175860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-registration-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.176006 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e80f1b1d-bd4e-4890-88eb-daf951411754-trusted-ca\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.176455 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/07bc4223-423a-4dcf-9338-a2bc95e91234-config\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.176581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-default-certificate\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.176686 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-mountpoint-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.176860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-config\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.181785 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-socket-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.182397 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/cdeebf50-e2da-438a-b872-64c4a8d43d6e-tmpfs\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.186906 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1266164c-6204-478a-9d2b-7f4a54cd42fa-config\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.188265 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.189175 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c4257778-cdcb-4430-beb7-a47766082129-images\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.190089 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/6c18f03b-59b4-4759-ae52-198497bc084d-plugins-dir\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.191308 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-cabundle\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.191866 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-srv-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.192387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-srv-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.194325 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6"] Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.197222 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c4257778-cdcb-4430-beb7-a47766082129-proxy-tls\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.228382 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/32975314-a63c-4c90-a5f6-6bee14a860c8-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.229425 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/985b208d-91e2-4e10-b919-0ef77ba89163-service-ca-bundle\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.230386 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-proxy-tls\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.231016 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-profile-collector-cert\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.231102 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-signing-key\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.231725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-metrics-certs\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.233071 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsb7h\" (UniqueName: \"kubernetes.io/projected/07bc4223-423a-4dcf-9338-a2bc95e91234-kube-api-access-dsb7h\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.233466 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.233816 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/985b208d-91e2-4e10-b919-0ef77ba89163-stats-auth\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.233945 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1266164c-6204-478a-9d2b-7f4a54cd42fa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.234699 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fefbdaaf-1e48-4731-93e6-285fff94b582-cert\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.235380 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-node-bootstrap-token\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.236013 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hks8\" (UniqueName: \"kubernetes.io/projected/dba786e6-e56c-4818-ab34-9c6ae4ab5a6c-kube-api-access-6hks8\") pod \"machine-config-controller-84d6567774-p8cpw\" (UID: \"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.236440 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqts\" (UniqueName: \"kubernetes.io/projected/929cf2ac-1dab-4c49-89ac-243e45f24493-kube-api-access-4vqts\") pod \"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.239299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/929cf2ac-1dab-4c49-89ac-243e45f24493-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-hvl68\" (UID: \"929cf2ac-1dab-4c49-89ac-243e45f24493\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.239650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b40f713-c236-4a45-8368-be3bb94cd428-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.240446 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.240892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8cd59dba-6eb2-498f-b659-f4710a2da4b4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.241254 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a0904103-6105-41fd-b158-2f8a5a99b773-metrics-tls\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.242181 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.242352 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/32975314-a63c-4c90-a5f6-6bee14a860c8-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.242939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e80f1b1d-bd4e-4890-88eb-daf951411754-metrics-tls\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.243097 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n9fg\" (UniqueName: \"kubernetes.io/projected/ff8504d4-729d-4bd7-bc4e-cc681c4c8a34-kube-api-access-2n9fg\") pod \"catalog-operator-68c6474976-mxn96\" (UID: \"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.243580 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/2904c307-27ef-43a1-8913-a24e9ad16aa0-certs\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.243748 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.243811 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw4bt\" (UniqueName: \"kubernetes.io/projected/985b208d-91e2-4e10-b919-0ef77ba89163-kube-api-access-kw4bt\") pod \"router-default-5444994796-k47rq\" (UID: \"985b208d-91e2-4e10-b919-0ef77ba89163\") " pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.244671 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8jm\" (UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-kube-api-access-mj8jm\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.245035 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.250523 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrtd\" (UniqueName: \"kubernetes.io/projected/5525eaf6-8a16-42f6-af0c-b2188a09fb5a-kube-api-access-dlrtd\") pod \"service-ca-9c57cc56f-2lngl\" (UID: \"5525eaf6-8a16-42f6-af0c-b2188a09fb5a\") " pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.253008 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.254529 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.254940 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4phd\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-kube-api-access-t4phd\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.254953 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjg4b\" (UniqueName: \"kubernetes.io/projected/fefbdaaf-1e48-4731-93e6-285fff94b582-kube-api-access-wjg4b\") pod \"ingress-canary-b46nz\" (UID: \"fefbdaaf-1e48-4731-93e6-285fff94b582\") " pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.255518 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2fkd\" (UniqueName: \"kubernetes.io/projected/c2182271-6931-405d-b230-a47f12606828-kube-api-access-k2fkd\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.256239 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8b40f713-c236-4a45-8368-be3bb94cd428-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9qvs7\" (UID: \"8b40f713-c236-4a45-8368-be3bb94cd428\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.258815 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-d8j6n\" (UID: \"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.262991 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e80f1b1d-bd4e-4890-88eb-daf951411754-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zstqj\" (UID: \"e80f1b1d-bd4e-4890-88eb-daf951411754\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.262991 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwzr\" (UniqueName: \"kubernetes.io/projected/c4257778-cdcb-4430-beb7-a47766082129-kube-api-access-2hwzr\") pod \"machine-config-operator-74547568cd-5vtr4\" (UID: \"c4257778-cdcb-4430-beb7-a47766082129\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.263917 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc5z\" (UniqueName: \"kubernetes.io/projected/6c18f03b-59b4-4759-ae52-198497bc084d-kube-api-access-hfc5z\") pod \"csi-hostpathplugin-d79m7\" (UID: \"6c18f03b-59b4-4759-ae52-198497bc084d\") " pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 
03:08:13.267374 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-apiservice-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.270057 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xmsxh"] Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.271078 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvvm\" (UniqueName: \"kubernetes.io/projected/cdeebf50-e2da-438a-b872-64c4a8d43d6e-kube-api-access-npvvm\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.271565 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbw84\" (UniqueName: \"kubernetes.io/projected/974d36e1-ff64-4ad8-9bd9-0efef426c97d-kube-api-access-gbw84\") pod \"migrator-59844c95c7-rhb7b\" (UID: \"974d36e1-ff64-4ad8-9bd9-0efef426c97d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.271953 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.272217 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmskt\" (UniqueName: 
\"kubernetes.io/projected/80a374f1-02f7-4092-8027-e1967bf9190f-kube-api-access-fmskt\") pod \"etcd-operator-b45778765-d87wj\" (UID: \"80a374f1-02f7-4092-8027-e1967bf9190f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.272426 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.772405439 +0000 UTC m=+146.895013011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.274856 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kbvp\" (UniqueName: \"kubernetes.io/projected/a0904103-6105-41fd-b158-2f8a5a99b773-kube-api-access-7kbvp\") pod \"dns-default-psxlr\" (UID: \"a0904103-6105-41fd-b158-2f8a5a99b773\") " pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.283899 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.289124 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.300162 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdeebf50-e2da-438a-b872-64c4a8d43d6e-webhook-cert\") pod \"packageserver-d55dfcdfc-nw9fc\" (UID: \"cdeebf50-e2da-438a-b872-64c4a8d43d6e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.300310 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlgn5\" (UniqueName: \"kubernetes.io/projected/2904c307-27ef-43a1-8913-a24e9ad16aa0-kube-api-access-rlgn5\") pod \"machine-config-server-fm29j\" (UID: \"2904c307-27ef-43a1-8913-a24e9ad16aa0\") " pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.314450 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.314931 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07bc4223-423a-4dcf-9338-a2bc95e91234-serving-cert\") pod \"service-ca-operator-777779d784-zst5x\" (UID: \"07bc4223-423a-4dcf-9338-a2bc95e91234\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.315395 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.324963 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-b46nz" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.314604 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-fm29j" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.329396 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.338572 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.343079 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.364100 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.376509 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.876494384 +0000 UTC m=+146.999101956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.376594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.377892 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw"] Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.405406 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.412216 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.419561 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7679s"] Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.427356 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.436923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.442205 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.449479 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.455970 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.474686 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.478337 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.478553 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.978515651 +0000 UTC m=+147.101123223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.478902 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.479485 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:13.979437797 +0000 UTC m=+147.102045369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.490881 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.516903 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.543679 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.561025 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.580177 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.580450 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.080404455 +0000 UTC m=+147.203012037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.581047 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.581631 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.081607448 +0000 UTC m=+147.204215020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.683038 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.683261 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.183215415 +0000 UTC m=+147.305823007 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.683482 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.683970 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.183949705 +0000 UTC m=+147.306557497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.761793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/32975314-a63c-4c90-a5f6-6bee14a860c8-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-smn8v\" (UID: \"32975314-a63c-4c90-a5f6-6bee14a860c8\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.761337 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.762744 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.762856 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.763137 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.763467 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7krd\" (UniqueName: \"kubernetes.io/projected/4c483458-0e51-4a45-86bc-df13cc609b9d-kube-api-access-w7krd\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.764704 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/4c483458-0e51-4a45-86bc-df13cc609b9d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-xlgtc\" (UID: \"4c483458-0e51-4a45-86bc-df13cc609b9d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.765158 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5np54\" (UniqueName: \"kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54\") pod \"marketplace-operator-79b997595-h498b\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.766569 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw9vv\" (UniqueName: \"kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv\") pod \"collect-profiles-29416500-jps28\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.766646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c2182271-6931-405d-b230-a47f12606828-profile-collector-cert\") pod \"olm-operator-6b444d44fb-gckck\" (UID: \"c2182271-6931-405d-b230-a47f12606828\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.767216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1266164c-6204-478a-9d2b-7f4a54cd42fa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-fws9v\" (UID: \"1266164c-6204-478a-9d2b-7f4a54cd42fa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.768514 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6p72\" (UniqueName: \"kubernetes.io/projected/8cd59dba-6eb2-498f-b659-f4710a2da4b4-kube-api-access-h6p72\") pod \"multus-admission-controller-857f4d67dd-bb2lf\" (UID: \"8cd59dba-6eb2-498f-b659-f4710a2da4b4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: W1206 03:08:13.778986 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3916e25_63a1_4aac_a9c6_75a5c6d4ee51.slice/crio-ef2ef0bf3e9a50e14b15ecfcee271519122f5c8fdba674dd34acf3f173359c4f WatchSource:0}: 
Error finding container ef2ef0bf3e9a50e14b15ecfcee271519122f5c8fdba674dd34acf3f173359c4f: Status 404 returned error can't find the container with id ef2ef0bf3e9a50e14b15ecfcee271519122f5c8fdba674dd34acf3f173359c4f Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.782698 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.785075 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.785505 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.285486079 +0000 UTC m=+147.408093651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.799795 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.810515 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.827102 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:13 crc kubenswrapper[4801]: W1206 03:08:13.838621 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e8d4ca1_cd89_4ca4_a51a_84ff37dd5d59.slice/crio-031484613eadca3b2fdcf962479be477a9550e2f8a0ab39aa159c14fc8869b72 WatchSource:0}: Error finding container 031484613eadca3b2fdcf962479be477a9550e2f8a0ab39aa159c14fc8869b72: Status 404 returned error can't find the container with id 031484613eadca3b2fdcf962479be477a9550e2f8a0ab39aa159c14fc8869b72 Dec 06 03:08:13 crc kubenswrapper[4801]: W1206 03:08:13.839703 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fdd24fe_710e_4452_a48a_1d59910c78e3.slice/crio-829223fca3e00ed46bbf5ed80a5eb46fa249ebe8e26816bfd45acd64f4227e22 WatchSource:0}: Error finding container 829223fca3e00ed46bbf5ed80a5eb46fa249ebe8e26816bfd45acd64f4227e22: Status 404 returned error can't find the container with id 829223fca3e00ed46bbf5ed80a5eb46fa249ebe8e26816bfd45acd64f4227e22 Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.888853 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.889468 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.38944138 +0000 UTC m=+147.512048962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.893298 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.922671 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.943307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" event={"ID":"7fdd24fe-710e-4452-a48a-1d59910c78e3","Type":"ContainerStarted","Data":"829223fca3e00ed46bbf5ed80a5eb46fa249ebe8e26816bfd45acd64f4227e22"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.946716 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" event={"ID":"3a0a30cb-3dee-44de-a8c3-affda5cb644a","Type":"ContainerStarted","Data":"dd92f3486fa947971b55fd61b32289cd994ad2d78290875f09156ae44520a990"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.948377 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" event={"ID":"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51","Type":"ContainerStarted","Data":"ef2ef0bf3e9a50e14b15ecfcee271519122f5c8fdba674dd34acf3f173359c4f"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.950218 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" event={"ID":"1c50977b-ea29-4832-927a-64352613ccd9","Type":"ContainerStarted","Data":"4855e0955437b67c81f3aed80e032b3f3aabe185eb9f2ec335ff1797679dd67d"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.952106 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" event={"ID":"76b3d36e-5cdb-40d7-b0e9-34e712c61d13","Type":"ContainerStarted","Data":"fb72a8e53b7df364e1ac7e3e10a508ffa7fadbe8cc05ff9e46ec346d64242a8a"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.955090 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="70437be2-9089-427f-8daa-22a299ed14b8" containerID="5ede06ea16c514bc18e633b727cf180aa6a3de995d95cb2eac5a890d3493c6cf" exitCode=0 Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.955195 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" event={"ID":"70437be2-9089-427f-8daa-22a299ed14b8","Type":"ContainerDied","Data":"5ede06ea16c514bc18e633b727cf180aa6a3de995d95cb2eac5a890d3493c6cf"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.957397 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" event={"ID":"d5e2010c-d755-4f50-b5de-799ab1c30e5a","Type":"ContainerStarted","Data":"945da8a5553541e6da569bec8ff340ff1dc542bfefcaa082624397e0833565af"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.959056 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" event={"ID":"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59","Type":"ContainerStarted","Data":"031484613eadca3b2fdcf962479be477a9550e2f8a0ab39aa159c14fc8869b72"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.960954 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" event={"ID":"5b9771c2-4f3e-4c26-ad26-fa67911f1169","Type":"ContainerStarted","Data":"4932f68c1ccbb0f462a90b3b69768102bb12b525560e94dc089a84aa99c9fefa"} Dec 06 03:08:13 crc kubenswrapper[4801]: I1206 03:08:13.990160 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:13 crc kubenswrapper[4801]: E1206 03:08:13.990908 4801 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.490878831 +0000 UTC m=+147.613486443 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.033175 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.063952 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.068726 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw"] Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.091903 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.092299 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.592285582 +0000 UTC m=+147.714893154 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.193424 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.193609 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.693583399 +0000 UTC m=+147.816190971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.193727 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.194926 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.694910896 +0000 UTC m=+147.817518468 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.295378 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.295518 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.795491334 +0000 UTC m=+147.918098906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.295658 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.295966 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.795956756 +0000 UTC m=+147.918564328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.396468 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.396665 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.896597366 +0000 UTC m=+148.019204978 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.396883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.397334 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.897312236 +0000 UTC m=+148.019919908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: W1206 03:08:14.397545 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddba786e6_e56c_4818_ab34_9c6ae4ab5a6c.slice/crio-2515bc48ee55e736fdba81e41016db1d8ad267f6ea6f6df65cf53c73aec43093 WatchSource:0}: Error finding container 2515bc48ee55e736fdba81e41016db1d8ad267f6ea6f6df65cf53c73aec43093: Status 404 returned error can't find the container with id 2515bc48ee55e736fdba81e41016db1d8ad267f6ea6f6df65cf53c73aec43093 Dec 06 03:08:14 crc kubenswrapper[4801]: W1206 03:08:14.433573 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-14463cb0282167dbc3b1eaa289cc6b8c4718d2201bdca24434d85e9c24085665 WatchSource:0}: Error finding container 14463cb0282167dbc3b1eaa289cc6b8c4718d2201bdca24434d85e9c24085665: Status 404 returned error can't find the container with id 14463cb0282167dbc3b1eaa289cc6b8c4718d2201bdca24434d85e9c24085665 Dec 06 03:08:14 crc kubenswrapper[4801]: W1206 03:08:14.446006 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2904c307_27ef_43a1_8913_a24e9ad16aa0.slice/crio-5b627c90e70b3e440dc50609152f1f60808e8ae4440deae779177dd654042b26 WatchSource:0}: Error finding container 5b627c90e70b3e440dc50609152f1f60808e8ae4440deae779177dd654042b26: Status 404 returned error can't find the container 
with id 5b627c90e70b3e440dc50609152f1f60808e8ae4440deae779177dd654042b26 Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.498592 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.498918 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.998882691 +0000 UTC m=+148.121490263 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.499185 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.499590 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-06 03:08:14.999577401 +0000 UTC m=+148.122184963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.601738 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.602195 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.102179934 +0000 UTC m=+148.224787506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.624417 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96"] Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.646317 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.703740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.704130 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.204111239 +0000 UTC m=+148.326718801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.805138 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.805724 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.305708094 +0000 UTC m=+148.428315656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.907130 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:14 crc kubenswrapper[4801]: E1206 03:08:14.907824 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.407800034 +0000 UTC m=+148.530407636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.975354 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fm29j" event={"ID":"2904c307-27ef-43a1-8913-a24e9ad16aa0","Type":"ContainerStarted","Data":"5b627c90e70b3e440dc50609152f1f60808e8ae4440deae779177dd654042b26"} Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.980395 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k47rq" event={"ID":"985b208d-91e2-4e10-b919-0ef77ba89163","Type":"ContainerStarted","Data":"a1dd31e4f1b62f841f5039d49d04af7cf0735fffc90ecb7dca429262e978990e"} Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.981565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" event={"ID":"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34","Type":"ContainerStarted","Data":"e9e17278b76c5988f80eeca843ded539520d835c736fa0688a33874cb9e521f6"} Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.984533 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" event={"ID":"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c","Type":"ContainerStarted","Data":"2515bc48ee55e736fdba81e41016db1d8ad267f6ea6f6df65cf53c73aec43093"} Dec 06 03:08:14 crc kubenswrapper[4801]: I1206 03:08:14.993239 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"14463cb0282167dbc3b1eaa289cc6b8c4718d2201bdca24434d85e9c24085665"} Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.014681 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.014921 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.514890101 +0000 UTC m=+148.637497673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.015352 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.015912 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.515894268 +0000 UTC m=+148.638501840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.116356 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.116932 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.616879748 +0000 UTC m=+148.739487320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.218503 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.218911 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.718891285 +0000 UTC m=+148.841498857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.320319 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.320671 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.820654425 +0000 UTC m=+148.943261987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.421466 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.421936 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:15.921916732 +0000 UTC m=+149.044524304 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.522814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.523177 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.023158698 +0000 UTC m=+149.145766270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.614718 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zst5x"] Dec 06 03:08:15 crc kubenswrapper[4801]: W1206 03:08:15.623280 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07bc4223_423a_4dcf_9338_a2bc95e91234.slice/crio-44aa8146a9a78706ee74671f59a30ba3c95d33f71b2c057e1c063b23e785fd05 WatchSource:0}: Error finding container 44aa8146a9a78706ee74671f59a30ba3c95d33f71b2c057e1c063b23e785fd05: Status 404 returned error can't find the container with id 44aa8146a9a78706ee74671f59a30ba3c95d33f71b2c057e1c063b23e785fd05 Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.624031 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.624419 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.124405754 +0000 UTC m=+149.247013326 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.631840 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-d87wj"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.636890 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:08:15 crc kubenswrapper[4801]: W1206 03:08:15.638776 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80a374f1_02f7_4092_8027_e1967bf9190f.slice/crio-0cc98cace78fbd61fdf204595a48ffb877dbe6b83dbd8c84c86cebd200451d2c WatchSource:0}: Error finding container 0cc98cace78fbd61fdf204595a48ffb877dbe6b83dbd8c84c86cebd200451d2c: Status 404 returned error can't find the container with id 0cc98cace78fbd61fdf204595a48ffb877dbe6b83dbd8c84c86cebd200451d2c Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.647098 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-l87sx"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.648457 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.672160 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:08:15 crc kubenswrapper[4801]: W1206 03:08:15.680686 4801 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93dc3a8f_a772_4d28_89d6_3253b6c51aa3.slice/crio-86b5ec4eba04be5f0ccd29d7c5c0b458af48581aff0f14efc571baeea8d1993c WatchSource:0}: Error finding container 86b5ec4eba04be5f0ccd29d7c5c0b458af48581aff0f14efc571baeea8d1993c: Status 404 returned error can't find the container with id 86b5ec4eba04be5f0ccd29d7c5c0b458af48581aff0f14efc571baeea8d1993c Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.725428 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.725862 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.225823065 +0000 UTC m=+149.348430637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.774805 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-b46nz"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.778801 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.790879 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.809427 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-d79m7"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.831972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.832497 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:16.332479691 +0000 UTC m=+149.455087263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.937827 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:15 crc kubenswrapper[4801]: E1206 03:08:15.938420 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.438402225 +0000 UTC m=+149.561009797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.943052 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.945930 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.961765 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v"] Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.997881 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerStarted","Data":"86b5ec4eba04be5f0ccd29d7c5c0b458af48581aff0f14efc571baeea8d1993c"} Dec 06 03:08:15 crc kubenswrapper[4801]: I1206 03:08:15.999097 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" event={"ID":"6c18f03b-59b4-4759-ae52-198497bc084d","Type":"ContainerStarted","Data":"10f99f59797f2cfcdf3ac582638f3cf023f66ab61086383262314aa28c82e341"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.000336 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnr4c" 
event={"ID":"4fac250c-7d1a-435f-a613-8c4646b7be9d","Type":"ContainerStarted","Data":"fbe134151129cd4ae5408beae35c33989a980d62dff12ccdfe04947cf1ead24c"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.001423 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" event={"ID":"8b40f713-c236-4a45-8368-be3bb94cd428","Type":"ContainerStarted","Data":"2b76af6193b9e8242818dfb25de744029e3ef403e81d821a47ecb8f557d562be"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.002399 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b46nz" event={"ID":"fefbdaaf-1e48-4731-93e6-285fff94b582","Type":"ContainerStarted","Data":"e1b027c9ac2f1c90c04fa1795322441206390c342e897efa934d3c5bb2b2be51"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.003340 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" event={"ID":"07bc4223-423a-4dcf-9338-a2bc95e91234","Type":"ContainerStarted","Data":"44aa8146a9a78706ee74671f59a30ba3c95d33f71b2c057e1c063b23e785fd05"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.004259 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" event={"ID":"349c2ebc-4077-42b4-b295-41d0a3a18e74","Type":"ContainerStarted","Data":"fcb4455830739c34f9e4c260a882f345846e0a23769dc4aade2fe16baecf1902"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.005539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l87sx" event={"ID":"e3827827-d4d4-4506-8318-6867da12c067","Type":"ContainerStarted","Data":"01d4bbe5ead8c3d27029297a4165db976b7536be00d6a658698dc4e26257eb1e"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.006677 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" event={"ID":"80a374f1-02f7-4092-8027-e1967bf9190f","Type":"ContainerStarted","Data":"0cc98cace78fbd61fdf204595a48ffb877dbe6b83dbd8c84c86cebd200451d2c"} Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.040034 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.040459 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.540444203 +0000 UTC m=+149.663051775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.120405 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-dqs5h" podStartSLOduration=127.120385523 podStartE2EDuration="2m7.120385523s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:16.024991278 +0000 UTC m=+149.147598890" watchObservedRunningTime="2025-12-06 03:08:16.120385523 +0000 UTC m=+149.242993095" Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.124891 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.126398 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.136602 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.150935 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.151393 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.651365096 +0000 UTC m=+149.773972668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.153369 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.155715 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2lngl"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.159079 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.166198 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.168274 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-psxlr"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.169087 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-f492b"] Dec 
06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.172330 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.178723 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bb2lf"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.181315 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.181545 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4"] Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.253657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.254174 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.754156325 +0000 UTC m=+149.876763897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.354553 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.354727 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.854699042 +0000 UTC m=+149.977306624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.355180 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.355609 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.855598236 +0000 UTC m=+149.978205818 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.457917 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.458088 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.958060526 +0000 UTC m=+150.080668098 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.458154 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.458485 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:16.958478108 +0000 UTC m=+150.081085680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.561378 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.561836 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.061819962 +0000 UTC m=+150.184427534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.663692 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.664781 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.164739474 +0000 UTC m=+150.287347046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.765571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.766066 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.266046001 +0000 UTC m=+150.388653583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.866958 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.867372 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.367356 +0000 UTC m=+150.489963572 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:16 crc kubenswrapper[4801]: I1206 03:08:16.968422 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:16 crc kubenswrapper[4801]: E1206 03:08:16.968877 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.468854243 +0000 UTC m=+150.591461815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.069448 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" event={"ID":"32975314-a63c-4c90-a5f6-6bee14a860c8","Type":"ContainerStarted","Data":"e093432b0c01027c4f15da011c7a89c9cc4bc037196ad694430560a7ed9458c7"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.072631 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" event={"ID":"5b9771c2-4f3e-4c26-ad26-fa67911f1169","Type":"ContainerStarted","Data":"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.072954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.073418 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.57340327 +0000 UTC m=+150.696010842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.089735 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" event={"ID":"7fdd24fe-710e-4452-a48a-1d59910c78e3","Type":"ContainerStarted","Data":"9598eef213fd167cfc3a386954d8bcb73b62ba48b7dd765a5a5df2b9cd5a63ec"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.098838 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" event={"ID":"80a374f1-02f7-4092-8027-e1967bf9190f","Type":"ContainerStarted","Data":"d501aa29467a9cc15ba5c9c52dff6852c2acf855168abb61ba3612606f84e86c"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.107402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" event={"ID":"1266164c-6204-478a-9d2b-7f4a54cd42fa","Type":"ContainerStarted","Data":"21aaf9eba31affc88e8d0b28f99fb2d809942cab22281c04ec37727678ae261f"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.115992 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" event={"ID":"76b3d36e-5cdb-40d7-b0e9-34e712c61d13","Type":"ContainerStarted","Data":"a636b0679424ac85f3b66a42f8762c27a6fe638de213b420ddabd5abb1f5e484"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.123175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" event={"ID":"d5e2010c-d755-4f50-b5de-799ab1c30e5a","Type":"ContainerStarted","Data":"f5af59ff342a27521a7a7e93ceaef3dabd1629b57060e61f9c6a8c6c191a3292"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.133490 4801 generic.go:334] "Generic (PLEG): container finished" podID="a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d" containerID="d76f3c69d50f322819f274c7e3a9869050f23f3ab6aeb490f1e916a29e59c1f0" exitCode=0 Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.133620 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" event={"ID":"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d","Type":"ContainerDied","Data":"d76f3c69d50f322819f274c7e3a9869050f23f3ab6aeb490f1e916a29e59c1f0"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.161729 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerStarted","Data":"829e91a399b6bd3f5e3303b7b6418aac772ec01a8ff729a47e368455b9acc789"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.174131 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" event={"ID":"8cd59dba-6eb2-498f-b659-f4710a2da4b4","Type":"ContainerStarted","Data":"37c0896f0162e4be6d7349f3ca12a8e5d5406fbf1ec1e6b33e426543ef74e5d3"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.176601 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.176798 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.676748504 +0000 UTC m=+150.799356076 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.176887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.177254 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.677245458 +0000 UTC m=+150.799853030 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.237930 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f492b" event={"ID":"c5f37838-e5ab-461e-833e-d07b0bf13cf3","Type":"ContainerStarted","Data":"9b301d36f688a8af8d5f2766a098084275ebf21d8ab4dc48544144c0879d9123"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.237979 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" event={"ID":"4c483458-0e51-4a45-86bc-df13cc609b9d","Type":"ContainerStarted","Data":"25e69441253672b7b6041093ee04518a955b7b470dbeca931c157fe6cc6ea27c"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.237992 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" event={"ID":"630057a4-ba0a-485b-8ac1-0113c42a9fe5","Type":"ContainerStarted","Data":"80784d9c614f23a2f51d751b2ac5efe74c8111356a8ea3196cc8e410bff6da7a"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.238005 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" event={"ID":"ff8504d4-729d-4bd7-bc4e-cc681c4c8a34","Type":"ContainerStarted","Data":"fd4b703d4463c2bc37158d5b42fbaee7c190073a00c6c4a263be4b532769be39"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.238020 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" event={"ID":"07bc4223-423a-4dcf-9338-a2bc95e91234","Type":"ContainerStarted","Data":"ade56ef560b37b6ca9cf4f4dc76db8e298b68c6ff449a78293af2a7a935cce08"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.239828 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k47rq" event={"ID":"985b208d-91e2-4e10-b919-0ef77ba89163","Type":"ContainerStarted","Data":"d567c38475d9f057288191cd8e9f6abb613470746b53d29d737d321c1a45aa86"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.242029 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" event={"ID":"929cf2ac-1dab-4c49-89ac-243e45f24493","Type":"ContainerStarted","Data":"051905ba753c2a55ce13d3c07b2438b12362478f743fbcceb763d07c45ae7f4a"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.247612 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" event={"ID":"3a0a30cb-3dee-44de-a8c3-affda5cb644a","Type":"ContainerStarted","Data":"c22244f1b5fb26444f7b9dd3d635f03d5b1fba2401de6790c37a0d78cf4ab831"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.248824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" event={"ID":"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c","Type":"ContainerStarted","Data":"fbce76a3c25df63e1ed7eec6b1bdd1c16f6f76d8915ee5c0cad897caae0ec39f"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.249660 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" event={"ID":"c2182271-6931-405d-b230-a47f12606828","Type":"ContainerStarted","Data":"8cad8a475b91462089ff3ad1c1b1dedf02348b91dddee8631c6c1528ae7c7eac"} Dec 06 03:08:17 crc 
kubenswrapper[4801]: I1206 03:08:17.253609 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnr4c" event={"ID":"4fac250c-7d1a-435f-a613-8c4646b7be9d","Type":"ContainerStarted","Data":"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.254257 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2eef7aa1bf9f31de449d8b0667249faed94b0a1b6b788035c637852a30adeb3a"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.254922 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" event={"ID":"ac114e18-3e28-463f-ad3c-38ae077fdac1","Type":"ContainerStarted","Data":"be9c37ae31bb21b3c3f013c6f062f3040dc1364370b0b7cba232863165808219"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.256612 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" event={"ID":"349c2ebc-4077-42b4-b295-41d0a3a18e74","Type":"ContainerStarted","Data":"63e39f06ae2cac27b0b40ebb5dfa2a9685c8892bc61a4074362b3d858bd78682"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.258112 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" event={"ID":"8e8d4ca1-cd89-4ca4-a51a-84ff37dd5d59","Type":"ContainerStarted","Data":"850038d533bbc07dab653df4d3ffd243ef487d4db054f13906ddb1d27541f674"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.268826 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" 
event={"ID":"c3916e25-63a1-4aac-a9c6-75a5c6d4ee51","Type":"ContainerStarted","Data":"7eb41d0ef216bab58e50fe65423547dda9a84b7ee0d6fa9714cbdb6032d33908"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.281979 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.282484 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.782458883 +0000 UTC m=+150.905066465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.284740 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"14f341ea65307949de998ad2f58e70aa076979f66956a76e18173aad3588c61d"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.295514 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-fm29j" 
event={"ID":"2904c307-27ef-43a1-8913-a24e9ad16aa0","Type":"ContainerStarted","Data":"dba6dd51924cd7802f54db20149f7e6ca06cc8c50cb294d9a625764ea8c29890"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.297890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-l87sx" event={"ID":"e3827827-d4d4-4506-8318-6867da12c067","Type":"ContainerStarted","Data":"74287674f97d83b99062e571abe52527a28add50379c2d7fffa7a4db5bc0c944"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.328088 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2b2ea4e26c133d36558806f0ae131266a54e506691d8513373e5c002418f6928"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.339566 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" event={"ID":"974d36e1-ff64-4ad8-9bd9-0efef426c97d","Type":"ContainerStarted","Data":"417169131d14981ff9939df8efeaba9b8feb03149f5312efecd3a8066719f0f8"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.349142 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" event={"ID":"48133237-eb56-4344-8fb4-8e61ce32bf37","Type":"ContainerStarted","Data":"a28a9e456318ca5060a1ab0a5f421fffb30ab9c25ef296b1ae69a01c5e3b33eb"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.351499 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" event={"ID":"c4257778-cdcb-4430-beb7-a47766082129","Type":"ContainerStarted","Data":"9f2204dd6e61eec190414c2cf67b01f97c813a07a8b128adaa8df53c9e539ab6"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.352538 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" event={"ID":"e80f1b1d-bd4e-4890-88eb-daf951411754","Type":"ContainerStarted","Data":"1cc758baa620b0275a6fe2f272c4be7fce16cde0fc737f040659d0b804f8a3da"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.353186 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" event={"ID":"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07","Type":"ContainerStarted","Data":"daa04baa4cbda7c8f75cb8f9d4f7391596826e94ad85c66c753b92a29add37e5"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.353790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-psxlr" event={"ID":"a0904103-6105-41fd-b158-2f8a5a99b773","Type":"ContainerStarted","Data":"72bf9117c18e11a8726bd87ae11ef4fd32aceb2dd56a2952be231f8a48c472ea"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.356827 4801 generic.go:334] "Generic (PLEG): container finished" podID="d58c5185-9cfb-4e5f-956e-d12e12b5e81e" containerID="7e23c2447ea80e961e0ee49071b25bbdf09b3bcc952cb0fa2a1da201b045969f" exitCode=0 Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.356959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" event={"ID":"d58c5185-9cfb-4e5f-956e-d12e12b5e81e","Type":"ContainerDied","Data":"7e23c2447ea80e961e0ee49071b25bbdf09b3bcc952cb0fa2a1da201b045969f"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.362452 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" event={"ID":"cdeebf50-e2da-438a-b872-64c4a8d43d6e","Type":"ContainerStarted","Data":"a9575020143d0fd2f6155cc2d9ab17b25996cabd9cfe3cb0075bf96ef169b410"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.365998 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" 
event={"ID":"5525eaf6-8a16-42f6-af0c-b2188a09fb5a","Type":"ContainerStarted","Data":"280b280c12ed79bb6836c8139373b498068b2133d129495022fe2b6f27d30a25"} Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.385858 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.386355 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.886333451 +0000 UTC m=+151.008941023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.489985 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.492838 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:17.99274462 +0000 UTC m=+151.115352192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.500396 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.502064 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.002048636 +0000 UTC m=+151.124656198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.602053 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.602719 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.102699106 +0000 UTC m=+151.225306678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.603025 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.603512 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.103492707 +0000 UTC m=+151.226100279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.704288 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.704449 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.204414055 +0000 UTC m=+151.327021617 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.704599 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.705023 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.205014572 +0000 UTC m=+151.327622144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.805862 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.806119 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.306096813 +0000 UTC m=+151.428704385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.806166 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.806651 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.306642679 +0000 UTC m=+151.429250251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.907794 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.907924 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.407886795 +0000 UTC m=+151.530494367 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:17 crc kubenswrapper[4801]: I1206 03:08:17.908026 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:17 crc kubenswrapper[4801]: E1206 03:08:17.908375 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.408367147 +0000 UTC m=+151.530974719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.009565 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.009826 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.509784789 +0000 UTC m=+151.632392361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.010169 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.010642 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.510627832 +0000 UTC m=+151.633235404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.044793 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-l87sx" podStartSLOduration=129.044739321 podStartE2EDuration="2m9.044739321s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.040828533 +0000 UTC m=+151.163436105" watchObservedRunningTime="2025-12-06 03:08:18.044739321 +0000 UTC m=+151.167346893" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.057397 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.058580 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: W1206 03:08:18.060081 4801 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.060146 4801 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.085730 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-fm29j" podStartSLOduration=10.085707558 podStartE2EDuration="10.085707558s" podCreationTimestamp="2025-12-06 03:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.067645501 +0000 UTC m=+151.190253083" watchObservedRunningTime="2025-12-06 03:08:18.085707558 +0000 UTC m=+151.208315130" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.088590 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.114065 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.114565 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.614545261 +0000 UTC m=+151.737152823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.217014 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.217071 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrqh\" (UniqueName: \"kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.217143 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.217178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.217554 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.717528976 +0000 UTC m=+151.840136548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.249221 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.250371 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.261725 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.284894 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318083 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.318317 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.818288868 +0000 UTC m=+151.940896440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318445 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrqh\" (UniqueName: \"kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318681 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content\") pod \"community-operators-l77v9\" (UID: 
\"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.318971 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.319169 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.819150063 +0000 UTC m=+151.941757625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.319207 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content\") pod \"community-operators-l77v9\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.360091 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrqh\" (UniqueName: \"kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh\") pod \"community-operators-l77v9\" (UID: 
\"98beccef-be81-4934-b000-a41b741ed810\") " pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.380807 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" event={"ID":"48133237-eb56-4344-8fb4-8e61ce32bf37","Type":"ContainerStarted","Data":"469b81a57efcfe247725587e3c3c3b9fb52d9dcec12a1f0288f04e73e12f8ffd"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.385618 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-b46nz" event={"ID":"fefbdaaf-1e48-4731-93e6-285fff94b582","Type":"ContainerStarted","Data":"a1d8b0ea2228507de92171635556f7d0be0e8ce047c4d68962881c41139fdcd7"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.386990 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" event={"ID":"630057a4-ba0a-485b-8ac1-0113c42a9fe5","Type":"ContainerStarted","Data":"57258c0fda32402a7ebb53442871e84392e38f23597f494fa267739d36a616b9"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.388264 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" event={"ID":"974d36e1-ff64-4ad8-9bd9-0efef426c97d","Type":"ContainerStarted","Data":"d34c92d3efa8707b781001c30bf17cddb3c8872e440e91f7e2e3745dcdfd7b65"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.389474 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" event={"ID":"8b40f713-c236-4a45-8368-be3bb94cd428","Type":"ContainerStarted","Data":"0941c0b86b8ac6b04deef76445bd70dfb5d94dca322b20b46bdc262c722b52a3"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.390576 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" event={"ID":"cdeebf50-e2da-438a-b872-64c4a8d43d6e","Type":"ContainerStarted","Data":"3497a376f6b03c65486169347b016531d26b0bd1fc3b0ea0f4729df1eff6f27b"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.392071 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"098a625240ad74a51052fbda16d3c14fcbb15bdd8be176b78c16a384543e1e4e"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.393249 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" event={"ID":"5525eaf6-8a16-42f6-af0c-b2188a09fb5a","Type":"ContainerStarted","Data":"209c2177dd28a499cc8c4035c6e19df7bfecfab20c2cedee07041bd95864c1f3"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.394564 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" event={"ID":"929cf2ac-1dab-4c49-89ac-243e45f24493","Type":"ContainerStarted","Data":"660b71a0fcf103b003ac406296f31114f3641ffac29d4ff9b868e4e2987b587f"} Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.396007 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.400976 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-l87sx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.401041 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l87sx" podUID="e3827827-d4d4-4506-8318-6867da12c067" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.413403 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.418549 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:18 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:18 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:18 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.418621 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420058 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420292 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 
03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.420381 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.920315876 +0000 UTC m=+152.042923448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420606 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qc7\" (UniqueName: \"kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420736 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities\") pod 
\"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.420799 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-k54lb" podStartSLOduration=129.420787699 podStartE2EDuration="2m9.420787699s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.418441165 +0000 UTC m=+151.541048747" watchObservedRunningTime="2025-12-06 03:08:18.420787699 +0000 UTC m=+151.543395271" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.421335 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:18.921310244 +0000 UTC m=+152.043917816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.460660 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.461619 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.490342 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.521740 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.522165 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6qc7\" (UniqueName: \"kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.522401 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.522634 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.524025 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.023993549 +0000 UTC m=+152.146601171 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.525634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.534078 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" podStartSLOduration=128.534047746 podStartE2EDuration="2m8.534047746s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.494284362 +0000 UTC m=+151.616891934" watchObservedRunningTime="2025-12-06 03:08:18.534047746 +0000 UTC m=+151.656655318" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.537976 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" podStartSLOduration=129.537946143 podStartE2EDuration="2m9.537946143s" 
podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.526013595 +0000 UTC m=+151.648621167" watchObservedRunningTime="2025-12-06 03:08:18.537946143 +0000 UTC m=+151.660553735" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.538131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.560520 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6qc7\" (UniqueName: \"kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7\") pod \"certified-operators-fn52d\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.566132 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.577440 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-d87wj" podStartSLOduration=129.577403799 podStartE2EDuration="2m9.577403799s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.554654903 +0000 UTC m=+151.677262485" watchObservedRunningTime="2025-12-06 03:08:18.577403799 +0000 UTC m=+151.700011371" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.625479 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2qms\" (UniqueName: \"kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.625555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.625597 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 
03:08:18.625621 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.625964 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.125952755 +0000 UTC m=+152.248560317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.650000 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xmsxh" podStartSLOduration=129.649958616 podStartE2EDuration="2m9.649958616s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.633181824 +0000 UTC m=+151.755789396" watchObservedRunningTime="2025-12-06 03:08:18.649958616 +0000 UTC m=+151.772566188" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.653915 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:08:18 crc 
kubenswrapper[4801]: I1206 03:08:18.655189 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.673932 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7pjv6" podStartSLOduration=129.673907095 podStartE2EDuration="2m9.673907095s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.672678151 +0000 UTC m=+151.795285733" watchObservedRunningTime="2025-12-06 03:08:18.673907095 +0000 UTC m=+151.796514667" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.675944 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.728361 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.728799 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2qms\" (UniqueName: \"kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.728867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.728897 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.729847 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.729944 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.229921766 +0000 UTC m=+152.352529328 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.730506 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.763691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2qms\" (UniqueName: \"kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms\") pod \"community-operators-htf5h\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.764830 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-dwczw" podStartSLOduration=128.764805416 podStartE2EDuration="2m8.764805416s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.764323374 +0000 UTC m=+151.886930936" watchObservedRunningTime="2025-12-06 03:08:18.764805416 +0000 UTC m=+151.887412988" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.789286 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" podStartSLOduration=129.789256859 podStartE2EDuration="2m9.789256859s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.783162451 +0000 UTC m=+151.905770023" watchObservedRunningTime="2025-12-06 03:08:18.789256859 +0000 UTC m=+151.911864431" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.811404 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" podStartSLOduration=128.811386488 podStartE2EDuration="2m8.811386488s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.806050601 +0000 UTC m=+151.928658173" watchObservedRunningTime="2025-12-06 03:08:18.811386488 +0000 UTC m=+151.933994060" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.835464 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.835511 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.835539 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnhk9\" (UniqueName: \"kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.835594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.837039 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.337022213 +0000 UTC m=+152.459629785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.938946 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.939812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.939861 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.939899 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnhk9\" (UniqueName: \"kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " 
pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: E1206 03:08:18.940286 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.440263994 +0000 UTC m=+152.562871576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.941164 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.941457 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:18 crc kubenswrapper[4801]: I1206 03:08:18.961343 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k47rq" podStartSLOduration=128.961326854 podStartE2EDuration="2m8.961326854s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:18.960711028 +0000 UTC m=+152.083318600" watchObservedRunningTime="2025-12-06 03:08:18.961326854 +0000 UTC m=+152.083934426" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.026177 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnhk9\" (UniqueName: \"kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9\") pod \"certified-operators-fv89x\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.080660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.081032 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.581019609 +0000 UTC m=+152.703627181 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.182843 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.183202 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.68318554 +0000 UTC m=+152.805793112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.183438 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.183746 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.683740065 +0000 UTC m=+152.806347637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.216111 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.216369 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.221830 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.256302 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.284410 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.284646 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.78460405 +0000 UTC m=+152.907211622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.284715 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.285530 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.785522376 +0000 UTC m=+152.908129948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.297227 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.385703 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.386065 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.886048753 +0000 UTC m=+153.008656325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.409656 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" event={"ID":"8cd59dba-6eb2-498f-b659-f4710a2da4b4","Type":"ContainerStarted","Data":"b0f078962b0d9c98ae6227d497f509971c1158e27c7864c7f6463d9c35063944"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.412797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" event={"ID":"d5e2010c-d755-4f50-b5de-799ab1c30e5a","Type":"ContainerStarted","Data":"e97ee2e940f32ec1d7ccaf69f04b7654b1757965bb22197c06f5c46f8bf71429"} Dec 
06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.422629 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:19 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:19 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:19 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.423195 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.433789 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a715e800e11234317ef00b2ee37df8ac495965156853d62ee0823a9f0cbbb911"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.491864 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" event={"ID":"ac114e18-3e28-463f-ad3c-38ae077fdac1","Type":"ContainerStarted","Data":"60d162db0acd5e3cf1efbb80a88050e6107d531a725586b9bce309c4dc8ac143"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.494355 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc 
kubenswrapper[4801]: E1206 03:08:19.494713 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:19.994699542 +0000 UTC m=+153.117307114 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.495565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" event={"ID":"1266164c-6204-478a-9d2b-7f4a54cd42fa","Type":"ContainerStarted","Data":"49bae33f845749e1c48e49d336faeaca7f1e510b9af17542f7b8c43aef6c9a35"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.500203 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerStarted","Data":"d2e852399c8f20b9a96598f78b905615a47a2d6984920e68966ca26c2f4862c3"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.501691 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" event={"ID":"4c483458-0e51-4a45-86bc-df13cc609b9d","Type":"ContainerStarted","Data":"7e848781a10cb1afb8757fc23aa5948f9fc3cc360054adaed4226b5c7641519f"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.502990 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" event={"ID":"32975314-a63c-4c90-a5f6-6bee14a860c8","Type":"ContainerStarted","Data":"d6c3728abb8b4cd153e6b99638aa3e7d44a037838d151eda0d1994f1bb26c94c"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.504898 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" event={"ID":"e80f1b1d-bd4e-4890-88eb-daf951411754","Type":"ContainerStarted","Data":"8baa2e7e28a5d229f4bcd489746a7502f29a262eafb15bb71985abbe89e061a6"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.507162 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" event={"ID":"6c18f03b-59b4-4759-ae52-198497bc084d","Type":"ContainerStarted","Data":"23cd9ec0d70c19ea6519c5942194b007cb6ba16a5172d784fb5526caf0500c5f"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.513877 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" event={"ID":"dba786e6-e56c-4818-ab34-9c6ae4ab5a6c","Type":"ContainerStarted","Data":"023e04e68f0f23663557b6d5b77100359e8c34dcddcf5119987d0283098dab17"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.515977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" event={"ID":"c2182271-6931-405d-b230-a47f12606828","Type":"ContainerStarted","Data":"dbbdc2075b030452d4fcc8ea08b8bbe7aedab4e767b717e449e890c86a99c72f"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.518820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" event={"ID":"c4257778-cdcb-4430-beb7-a47766082129","Type":"ContainerStarted","Data":"cf5901197a4977dc4b00b3e526b9fe84150923f2b0a5bd59166ddb7f2447d8cb"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.521010 
4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-xlgtc" podStartSLOduration=129.520988206 podStartE2EDuration="2m9.520988206s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.518182249 +0000 UTC m=+152.640789821" watchObservedRunningTime="2025-12-06 03:08:19.520988206 +0000 UTC m=+152.643595778" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.521437 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" event={"ID":"7fdd24fe-710e-4452-a48a-1d59910c78e3","Type":"ContainerStarted","Data":"83f1411d2d6b0064935a3a6fea1791a0ec7057c99eb78d92fa1ed0e78342f921"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.522639 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-f492b" event={"ID":"c5f37838-e5ab-461e-833e-d07b0bf13cf3","Type":"ContainerStarted","Data":"538688962b7b609454b9d4f9df0ca6cc34fd95baa184df333b7784f0577f488c"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.524431 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-psxlr" event={"ID":"a0904103-6105-41fd-b158-2f8a5a99b773","Type":"ContainerStarted","Data":"5ea9187dade64235e9bf2105fb2b44a5ccd58aca17728f3b06e4b2dc6cd99409"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.527823 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" event={"ID":"70437be2-9089-427f-8daa-22a299ed14b8","Type":"ContainerStarted","Data":"4875b73b3cd5656629bba8a12095a87cef5c8acdeb9837b9b18cf382ed706b4f"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.533741 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" event={"ID":"d58c5185-9cfb-4e5f-956e-d12e12b5e81e","Type":"ContainerStarted","Data":"f80ad42a01a4f0c0b0716090808e79fbc4209366e8b034c66ad84e5ea17c7990"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.538322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" event={"ID":"e83cc8aa-bb9f-4ad4-9f1d-9be6dc5ebb07","Type":"ContainerStarted","Data":"d81644d0132cc45a9e5c82746073bf3826d4f702805d6e34df9e1e01df2002c9"} Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.538407 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.543585 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.545011 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-l87sx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.545097 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l87sx" podUID="e3827827-d4d4-4506-8318-6867da12c067" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.545428 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-p8cpw" podStartSLOduration=129.545404767 podStartE2EDuration="2m9.545404767s" 
podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.538738885 +0000 UTC m=+152.661346457" watchObservedRunningTime="2025-12-06 03:08:19.545404767 +0000 UTC m=+152.668012339" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.557481 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" podStartSLOduration=129.557456799 podStartE2EDuration="2m9.557456799s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.556911575 +0000 UTC m=+152.679519147" watchObservedRunningTime="2025-12-06 03:08:19.557456799 +0000 UTC m=+152.680064371" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.604475 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.605937 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.105903463 +0000 UTC m=+153.228511035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.612151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.617386 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.117368618 +0000 UTC m=+153.239976190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.640475 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" podStartSLOduration=129.640448643 podStartE2EDuration="2m9.640448643s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.575452355 +0000 UTC m=+152.698059917" watchObservedRunningTime="2025-12-06 03:08:19.640448643 +0000 UTC m=+152.763056215" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.648288 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zst5x" podStartSLOduration=129.648267879 podStartE2EDuration="2m9.648267879s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.643129177 +0000 UTC m=+152.765736749" watchObservedRunningTime="2025-12-06 03:08:19.648267879 +0000 UTC m=+152.770875451" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.717221 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.717585 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.217565515 +0000 UTC m=+153.340173087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.730315 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-b46nz" podStartSLOduration=11.730296825 podStartE2EDuration="11.730296825s" podCreationTimestamp="2025-12-06 03:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.685374459 +0000 UTC m=+152.807982031" watchObservedRunningTime="2025-12-06 03:08:19.730296825 +0000 UTC m=+152.852904397" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.745035 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qnr4c" podStartSLOduration=130.745007411 podStartE2EDuration="2m10.745007411s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.727281993 +0000 UTC 
m=+152.849889565" watchObservedRunningTime="2025-12-06 03:08:19.745007411 +0000 UTC m=+152.867614983" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.770231 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2lngl" podStartSLOduration=129.770105432 podStartE2EDuration="2m9.770105432s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.766345008 +0000 UTC m=+152.888952580" watchObservedRunningTime="2025-12-06 03:08:19.770105432 +0000 UTC m=+152.892713004" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.802531 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.817022 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9qvs7" podStartSLOduration=129.817002262 podStartE2EDuration="2m9.817002262s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.814804992 +0000 UTC m=+152.937412574" watchObservedRunningTime="2025-12-06 03:08:19.817002262 +0000 UTC m=+152.939609834" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.822226 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.822740 4801 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.322720359 +0000 UTC m=+153.445327931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.852542 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.889697 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" podStartSLOduration=130.889677392 podStartE2EDuration="2m10.889677392s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:19.868219612 +0000 UTC m=+152.990827184" watchObservedRunningTime="2025-12-06 03:08:19.889677392 +0000 UTC m=+153.012284964" Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.908953 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:08:19 crc kubenswrapper[4801]: I1206 03:08:19.924129 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:19 crc kubenswrapper[4801]: E1206 03:08:19.924594 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.424574322 +0000 UTC m=+153.547181894 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.028505 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.030190 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.530173149 +0000 UTC m=+153.652780721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.075580 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.077635 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.086184 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.097231 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.109962 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.133771 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.134344 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsgn\" (UniqueName: 
\"kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.134492 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.134533 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.134707 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.634690114 +0000 UTC m=+153.757297686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.219944 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.236183 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.236227 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.236267 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.236353 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-9xsgn\" (UniqueName: \"kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.238624 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.738601535 +0000 UTC m=+153.861209107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.239964 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.240032 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.273658 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.290722 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsgn\" (UniqueName: \"kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn\") pod \"redhat-marketplace-ffnmp\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.337915 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.338313 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:20.838293648 +0000 UTC m=+153.960901220 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.425375 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:20 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:20 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:20 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.425453 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.439848 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.440283 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:20.940261494 +0000 UTC m=+154.062869066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.443411 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.462832 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.463908 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.480085 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.537603 4801 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nw9fc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.537671 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc" podUID="cdeebf50-e2da-438a-b872-64c4a8d43d6e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.541206 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.541479 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.041444729 +0000 UTC m=+154.164052301 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.541842 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sxtj\" (UniqueName: \"kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.542056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.542197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.542341 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.542834 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.042815166 +0000 UTC m=+154.165422738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.566041 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerStarted","Data":"348d8fd3637dfae59795914a12ccf72b5ca97637191c0c7f73ad8e3fccfe7c8b"} Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.568626 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerStarted","Data":"7de9736277473e85c6ab3559c215e6b14d98aabe7c817d632b86ba0376829d7c"} Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.569981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" 
event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerStarted","Data":"26479017464c1c1dcfe4cac1f3a24f3cdd9e773d614b07a2843359fcc04016a0"} Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.571110 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerStarted","Data":"1e9738e65fc5f6bfeedb01283f80efb062afdf5d9f5732b82dc8d8010ef4b12d"} Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.594084 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7679s" podStartSLOduration=131.594059666 podStartE2EDuration="2m11.594059666s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.59239131 +0000 UTC m=+153.714998892" watchObservedRunningTime="2025-12-06 03:08:20.594059666 +0000 UTC m=+153.716667228" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.643391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.643816 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.644032 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.644303 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sxtj\" (UniqueName: \"kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.645825 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.14579757 +0000 UTC m=+154.268405142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.646794 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.648097 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.668262 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-d8j6n" podStartSLOduration=130.668238447 podStartE2EDuration="2m10.668238447s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.66321778 +0000 UTC m=+153.785825362" watchObservedRunningTime="2025-12-06 03:08:20.668238447 +0000 UTC m=+153.790846019" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.670226 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-smn8v" podStartSLOduration=131.670211292 podStartE2EDuration="2m11.670211292s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.629589874 +0000 UTC m=+153.752197456" watchObservedRunningTime="2025-12-06 03:08:20.670211292 +0000 UTC m=+153.792818874" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.699655 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sxtj\" (UniqueName: \"kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj\") pod \"redhat-marketplace-qfbpn\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.704542 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-fws9v" podStartSLOduration=130.704518976 podStartE2EDuration="2m10.704518976s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.691339663 +0000 UTC m=+153.813947235" watchObservedRunningTime="2025-12-06 03:08:20.704518976 +0000 UTC m=+153.827126548" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.747441 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 
03:08:20.747835 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.247820827 +0000 UTC m=+154.370428399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.788257 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.807330 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-f492b" podStartSLOduration=131.807307595 podStartE2EDuration="2m11.807307595s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.768374033 +0000 UTC m=+153.890981615" watchObservedRunningTime="2025-12-06 03:08:20.807307595 +0000 UTC m=+153.929915167" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.837214 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" podStartSLOduration=131.837192607 podStartE2EDuration="2m11.837192607s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-06 03:08:20.807233873 +0000 UTC m=+153.929841445" watchObservedRunningTime="2025-12-06 03:08:20.837192607 +0000 UTC m=+153.959800179" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.849967 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.850312 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.350296367 +0000 UTC m=+154.472903939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.876748 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" podStartSLOduration=130.876724265 podStartE2EDuration="2m10.876724265s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.837126335 +0000 UTC m=+153.959733907" watchObservedRunningTime="2025-12-06 03:08:20.876724265 +0000 UTC 
m=+153.999331837" Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.958435 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:20 crc kubenswrapper[4801]: E1206 03:08:20.958946 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.458927037 +0000 UTC m=+154.581534609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:20 crc kubenswrapper[4801]: I1206 03:08:20.959354 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zsvsf" podStartSLOduration=130.959334738 podStartE2EDuration="2m10.959334738s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:20.949981381 +0000 UTC m=+154.072588953" watchObservedRunningTime="2025-12-06 03:08:20.959334738 +0000 UTC m=+154.081942310" Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.063833 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.064101 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.56406607 +0000 UTC m=+154.686673652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.064584 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.064983 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.564970566 +0000 UTC m=+154.687578128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.141967 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.170291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.173964 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.673924994 +0000 UTC m=+154.796532566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.257258 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8t2r5"] Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.267131 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.276653 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.277013 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.77699917 +0000 UTC m=+154.899606742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.283448 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.300254 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t2r5"] Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.377840 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.378080 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.378120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " 
pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.378139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2rp2\" (UniqueName: \"kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.378323 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.878308009 +0000 UTC m=+155.000915581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.386431 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:08:21 crc kubenswrapper[4801]: W1206 03:08:21.426002 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9bf536e_ce23_42dc_bbaa_69626ccf959f.slice/crio-1e25fe92a48318010c4102cda4e65f23e0eb1fe390196e90086a0f303dfe4331 WatchSource:0}: Error finding container 1e25fe92a48318010c4102cda4e65f23e0eb1fe390196e90086a0f303dfe4331: Status 404 returned error can't find the container with id 
1e25fe92a48318010c4102cda4e65f23e0eb1fe390196e90086a0f303dfe4331
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.436380 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 03:08:21 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld
Dec 06 03:08:21 crc kubenswrapper[4801]: [+]process-running ok
Dec 06 03:08:21 crc kubenswrapper[4801]: healthz check failed
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.436431 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.480583 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.481401 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.481423 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2rp2\" (UniqueName: \"kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.481452 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.481823 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:21.981810116 +0000 UTC m=+155.104417688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.481046 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nw9fc"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.482343 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.482508 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.547487 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2rp2\" (UniqueName: \"kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2\") pod \"redhat-operators-8t2r5\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.587122 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.587226 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.087209417 +0000 UTC m=+155.209816989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.587353 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.587677 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.08767002 +0000 UTC m=+155.210277592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.621812 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" event={"ID":"929cf2ac-1dab-4c49-89ac-243e45f24493","Type":"ContainerStarted","Data":"1371a4f2091ec441638aa380456919b42a30eb527ee1bc4484e96edf58537b58"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.634697 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t2r5"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.639487 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" event={"ID":"c4257778-cdcb-4430-beb7-a47766082129","Type":"ContainerStarted","Data":"d8e6c3f15a2ee09732d4910158466d3a56e053439bd97a6e6b22062f58bbde63"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.650285 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerStarted","Data":"df25a277338087fd6eeac14bdd8c64d2570ee1041b4039ada8615739bb211d14"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.655802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" event={"ID":"8cd59dba-6eb2-498f-b659-f4710a2da4b4","Type":"ContainerStarted","Data":"5498ca270a174ef304e4c39a41823bec8de4d0d83e69afd41fdf39994229aed7"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.663083 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"]
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.664325 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.684419 4801 generic.go:334] "Generic (PLEG): container finished" podID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerID="26479017464c1c1dcfe4cac1f3a24f3cdd9e773d614b07a2843359fcc04016a0" exitCode=0
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.684503 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerDied","Data":"26479017464c1c1dcfe4cac1f3a24f3cdd9e773d614b07a2843359fcc04016a0"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.688191 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"]
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.688365 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.688526 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.688551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.694743 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.194718035 +0000 UTC m=+155.317325607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.705839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.705987 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pncc\" (UniqueName: \"kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.706370 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.206358156 +0000 UTC m=+155.328965728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.717465 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.751744 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerStarted","Data":"f62517087778f1e164d26675f9b13bb7e29453e186ccfc9d4384321593ad5c48"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.765977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" event={"ID":"a0ad9cbd-c157-4563-9c49-2b2e8dc9a13d","Type":"ContainerStarted","Data":"9be0064ef66b8208ffe0151e5e92a1e56050d214ddab6d4237be9d7830857a0a"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.792683 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" event={"ID":"974d36e1-ff64-4ad8-9bd9-0efef426c97d","Type":"ContainerStarted","Data":"61782a722280309c1ed8e9439d565b64fd6fbb662a3c679c6f88f6843f0d7688"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.795011 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerStarted","Data":"1e25fe92a48318010c4102cda4e65f23e0eb1fe390196e90086a0f303dfe4331"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.798378 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerStarted","Data":"14055be4011b10f39ac770f5c67050acec0682b07d37d11d81156827043027f3"}
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.806814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.807185 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.807215 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.807285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pncc\" (UniqueName: \"kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.807675 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.307658764 +0000 UTC m=+155.430266336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.808110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.808367 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.835926 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pncc\" (UniqueName: \"kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc\") pod \"redhat-operators-pvjj9\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.912622 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:21 crc kubenswrapper[4801]: E1206 03:08:21.913072 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.413053204 +0000 UTC m=+155.535660776 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:21 crc kubenswrapper[4801]: I1206 03:08:21.981273 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96"
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.014360 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.014894 4801 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p8b96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.014950 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" podUID="d58c5185-9cfb-4e5f-956e-d12e12b5e81e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.015225 4801 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p8b96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.015248 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" podUID="d58c5185-9cfb-4e5f-956e-d12e12b5e81e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.015569 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.515548185 +0000 UTC m=+155.638155787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.017884 4801 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p8b96 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.017921 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" podUID="d58c5185-9cfb-4e5f-956e-d12e12b5e81e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.042288 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvjj9"
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.063418 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8t2r5"]
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.119645 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.120140 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.620121472 +0000 UTC m=+155.742729044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: W1206 03:08:22.132287 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73474c40_4e21_4384_94be_94d4015e7668.slice/crio-2dc2d419a7ac3098e64a6399e62ed15a9f9c93f6d0a152961eb5d5fe2dd626aa WatchSource:0}: Error finding container 2dc2d419a7ac3098e64a6399e62ed15a9f9c93f6d0a152961eb5d5fe2dd626aa: Status 404 returned error can't find the container with id 2dc2d419a7ac3098e64a6399e62ed15a9f9c93f6d0a152961eb5d5fe2dd626aa
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.220585 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.220939 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.720923096 +0000 UTC m=+155.843530668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.323883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.324368 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.824344413 +0000 UTC m=+155.946951985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.423423 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 06 03:08:22 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld
Dec 06 03:08:22 crc kubenswrapper[4801]: [+]process-running ok
Dec 06 03:08:22 crc kubenswrapper[4801]: healthz check failed
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.423498 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.424977 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.425150 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.925128466 +0000 UTC m=+156.047736038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.425194 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.425577 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:22.925569078 +0000 UTC m=+156.048176650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.527310 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.527701 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.027685178 +0000 UTC m=+156.150292750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.629532 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.630091 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.130069387 +0000 UTC m=+156.252676959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.726647 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"]
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.730674 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.731140 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.231123848 +0000 UTC m=+156.353731420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: W1206 03:08:22.740539 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaec29137_ee19_4a21_85d3_4efbf7cf342b.slice/crio-fff3fbe8e40ef3b86024c3d37c1f6663f83b8a1f54786d42d9eaf969af71aea9 WatchSource:0}: Error finding container fff3fbe8e40ef3b86024c3d37c1f6663f83b8a1f54786d42d9eaf969af71aea9: Status 404 returned error can't find the container with id fff3fbe8e40ef3b86024c3d37c1f6663f83b8a1f54786d42d9eaf969af71aea9
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.816278 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerStarted","Data":"2dc2d419a7ac3098e64a6399e62ed15a9f9c93f6d0a152961eb5d5fe2dd626aa"}
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.818403 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerStarted","Data":"fff3fbe8e40ef3b86024c3d37c1f6663f83b8a1f54786d42d9eaf969af71aea9"}
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.822736 4801 generic.go:334] "Generic (PLEG): container finished" podID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerID="f62517087778f1e164d26675f9b13bb7e29453e186ccfc9d4384321593ad5c48" exitCode=0
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.822807 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerDied","Data":"f62517087778f1e164d26675f9b13bb7e29453e186ccfc9d4384321593ad5c48"}
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.828074 4801 generic.go:334] "Generic (PLEG): container finished" podID="98beccef-be81-4934-b000-a41b741ed810" containerID="14055be4011b10f39ac770f5c67050acec0682b07d37d11d81156827043027f3" exitCode=0
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.828127 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerDied","Data":"14055be4011b10f39ac770f5c67050acec0682b07d37d11d81156827043027f3"}
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.832547 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb"
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.832956 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.332941549 +0000 UTC m=+156.455549121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.933246 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.933605 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.433575978 +0000 UTC m=+156.556183550 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:22 crc kubenswrapper[4801]: I1206 03:08:22.933940 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:22 crc kubenswrapper[4801]: E1206 03:08:22.934378 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.43436209 +0000 UTC m=+156.556969662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.035588 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.035811 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.535781331 +0000 UTC m=+156.658388903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.035889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.036244 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.536226834 +0000 UTC m=+156.658834406 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.137312 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.137489 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.6374612 +0000 UTC m=+156.760068772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.137664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.138023 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.638016064 +0000 UTC m=+156.760623636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.238697 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.238917 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.73888781 +0000 UTC m=+156.861495372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.239012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.239408 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.739396055 +0000 UTC m=+156.862003627 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.245511 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.245925 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.256113 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mxn96" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.290881 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.290935 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.292353 4801 patch_prober.go:28] interesting pod/console-f9d7485db-qnr4c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.292413 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qnr4c" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" 
containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.315823 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.331167 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-l87sx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.331220 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-l87sx" podUID="e3827827-d4d4-4506-8318-6867da12c067" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.332347 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-f492b" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.332961 4801 patch_prober.go:28] interesting pod/downloads-7954f5f757-l87sx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.333015 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-l87sx" podUID="e3827827-d4d4-4506-8318-6867da12c067" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 
03:08:23.338952 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.339994 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.340098 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.840077336 +0000 UTC m=+156.962684908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.340680 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.342328 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.842311627 +0000 UTC m=+156.964919199 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.348602 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.413183 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.416102 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:23 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:23 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:23 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.416157 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.442134 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.442260 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.942239627 +0000 UTC m=+157.064847199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.444740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.445129 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:23.945112626 +0000 UTC m=+157.067720198 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.546258 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.546400 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.046372533 +0000 UTC m=+157.168980105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.546842 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.547565 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.047555935 +0000 UTC m=+157.170163507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.648219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.648428 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.14839721 +0000 UTC m=+157.271004772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.648504 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.648928 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.148912114 +0000 UTC m=+157.271519686 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.749381 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.749776 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.249743339 +0000 UTC m=+157.372350911 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.783182 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.788372 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.827877 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.835025 4801 generic.go:334] "Generic (PLEG): container finished" podID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerID="fc5849d161866678161798822c74c98d3cbbe7df253b953e29a875564abd6d00" exitCode=0 Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.835053 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerDied","Data":"fc5849d161866678161798822c74c98d3cbbe7df253b953e29a875564abd6d00"} Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.837924 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-gckck" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.852717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.854071 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.35405654 +0000 UTC m=+157.476664102 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.860606 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bb2lf" podStartSLOduration=133.860578299 podStartE2EDuration="2m13.860578299s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:23.85951623 +0000 UTC m=+156.982123812" watchObservedRunningTime="2025-12-06 03:08:23.860578299 +0000 UTC m=+156.983185871" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.913407 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" podStartSLOduration=133.913388012 podStartE2EDuration="2m13.913388012s" 
podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:23.892860747 +0000 UTC m=+157.015468329" watchObservedRunningTime="2025-12-06 03:08:23.913388012 +0000 UTC m=+157.035995594" Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.953581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.953730 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.453709622 +0000 UTC m=+157.576317194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:23 crc kubenswrapper[4801]: I1206 03:08:23.954013 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:23 crc kubenswrapper[4801]: E1206 03:08:23.954998 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.454978447 +0000 UTC m=+157.577586009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.055479 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.055708 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.555650788 +0000 UTC m=+157.678258370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.055828 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.056129 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.55611761 +0000 UTC m=+157.678725172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.156598 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.156961 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.656910384 +0000 UTC m=+157.779518016 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.157391 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.157916 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.657892491 +0000 UTC m=+157.780500093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.259475 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.260505 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.760478214 +0000 UTC m=+157.883085786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.362179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.362698 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.862675097 +0000 UTC m=+157.985282679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.415903 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:24 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:24 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:24 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.416350 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.462946 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.463205 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:24.963174863 +0000 UTC m=+158.085782445 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.463239 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.463646 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:24.963634065 +0000 UTC m=+158.086241717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.564420 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.564616 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.064583453 +0000 UTC m=+158.187191035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.564810 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.565168 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.065152579 +0000 UTC m=+158.187760151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.666693 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.666887 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.166860307 +0000 UTC m=+158.289467879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.667010 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.667376 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.167368072 +0000 UTC m=+158.289975644 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.767731 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.767932 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.267898248 +0000 UTC m=+158.390505820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.768069 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.768375 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.268365301 +0000 UTC m=+158.390972963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.868953 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.869103 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.369074763 +0000 UTC m=+158.491682325 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.869270 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.869636 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.369628468 +0000 UTC m=+158.492236040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.905003 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.905077 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.913052 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.970452 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.970613 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.470583866 +0000 UTC m=+158.593191438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.970790 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:24 crc kubenswrapper[4801]: E1206 03:08:24.971532 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.471521232 +0000 UTC m=+158.594128804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:24 crc kubenswrapper[4801]: I1206 03:08:24.985527 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.072242 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.072440 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.572407348 +0000 UTC m=+158.695014920 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.072620 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.073276 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.573247961 +0000 UTC m=+158.695855553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.126013 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.126876 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.129112 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.129366 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.134273 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.174042 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.174278 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.674252101 +0000 UTC m=+158.796859673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.174347 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.174420 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.174442 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.174820 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.674807126 +0000 UTC m=+158.797414698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.275374 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.275705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.275739 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.275958 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.276164 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.776134104 +0000 UTC m=+158.898741676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.295481 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.377161 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.377677 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.877650528 +0000 UTC m=+159.000258140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.417413 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:25 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:25 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:25 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.417505 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.457301 4801 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.478846 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.479014 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.978988707 +0000 UTC m=+159.101596289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.479454 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.479883 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:25.979871591 +0000 UTC m=+159.102479163 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.580863 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.581069 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.081043735 +0000 UTC m=+159.203651307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.581280 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.581603 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.081589961 +0000 UTC m=+159.204197533 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.663468 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.682160 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.682305 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.182278141 +0000 UTC m=+159.304885713 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.682467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.682897 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.182880558 +0000 UTC m=+159.305488140 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.783739 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.784093 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.284051022 +0000 UTC m=+159.406658624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.784161 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.784618 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.284599118 +0000 UTC m=+159.407206710 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.850613 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c","Type":"ContainerStarted","Data":"f42bfaee2d691a97a1f2f476635575e378db2275c2e3574f339f8f7793c34f35"} Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.856700 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x65wm" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.862516 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rhb7b" podStartSLOduration=135.862493361 podStartE2EDuration="2m15.862493361s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:25.86171565 +0000 UTC m=+158.984323222" watchObservedRunningTime="2025-12-06 03:08:25.862493361 +0000 UTC m=+158.985100933" Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.884971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:25 crc 
kubenswrapper[4801]: E1206 03:08:25.885370 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.385351579 +0000 UTC m=+159.507959161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:25 crc kubenswrapper[4801]: I1206 03:08:25.986942 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:25 crc kubenswrapper[4801]: E1206 03:08:25.988055 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.488040736 +0000 UTC m=+159.610648308 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.089241 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.089419 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.589391935 +0000 UTC m=+159.711999507 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.089845 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.090180 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.590170397 +0000 UTC m=+159.712777959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.191010 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.191103 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.691074073 +0000 UTC m=+159.813681645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.191216 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.191518 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.691508395 +0000 UTC m=+159.814115967 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.292657 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.293100 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.793072 +0000 UTC m=+159.915679572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.394983 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.395443 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:26.895426177 +0000 UTC m=+160.018033749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.417387 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:26 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:26 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:26 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.417480 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.496315 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.496703 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:26.996687753 +0000 UTC m=+160.119295325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.598134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.598589 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.098568547 +0000 UTC m=+160.221176119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.699343 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.699670 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.199603788 +0000 UTC m=+160.322211390 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.700379 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.700785 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.2007683 +0000 UTC m=+160.323375872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.802331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.802573 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.302542081 +0000 UTC m=+160.425149653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.803201 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.803555 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.303544638 +0000 UTC m=+160.426152210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.857152 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerStarted","Data":"5c557f7942f387cdc0b78c3d73ed7916c266f478e1ce8e034b20dd08fe602434"} Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.898616 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" podStartSLOduration=136.898589573 podStartE2EDuration="2m16.898589573s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:26.895830687 +0000 UTC m=+160.018438259" watchObservedRunningTime="2025-12-06 03:08:26.898589573 +0000 UTC m=+160.021197145" Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.898985 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5vtr4" podStartSLOduration=136.898978764 podStartE2EDuration="2m16.898978764s" podCreationTimestamp="2025-12-06 03:06:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:26.874306025 +0000 UTC m=+159.996913597" watchObservedRunningTime="2025-12-06 03:08:26.898978764 +0000 UTC m=+160.021586336" 
Dec 06 03:08:26 crc kubenswrapper[4801]: I1206 03:08:26.904466 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:26 crc kubenswrapper[4801]: E1206 03:08:26.905413 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.405384171 +0000 UTC m=+160.527991743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.006892 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.007306 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:27.507290825 +0000 UTC m=+160.629898397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.108931 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.109071 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.609037934 +0000 UTC m=+160.731645516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.109391 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.109850 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.609839337 +0000 UTC m=+160.732446919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.211523 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.211831 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.711791463 +0000 UTC m=+160.834399055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.212163 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.212798 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.712733779 +0000 UTC m=+160.835341391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.313052 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.313252 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.813224334 +0000 UTC m=+160.935831906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.313318 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.313680 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:27.813673117 +0000 UTC m=+160.936280689 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.419147 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:27 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:27 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:27 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.419275 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.424553 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.425442 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:27.925410881 +0000 UTC m=+161.048018493 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.526083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.526542 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.026520314 +0000 UTC m=+161.149127966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.627789 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.628222 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.128170451 +0000 UTC m=+161.250778013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.628780 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.629278 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.129257361 +0000 UTC m=+161.251864933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.730374 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.730550 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.230527898 +0000 UTC m=+161.353135490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.730683 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.731027 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.231019402 +0000 UTC m=+161.353626974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.831318 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.831590 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.331557508 +0000 UTC m=+161.454165080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.866113 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" event={"ID":"70437be2-9089-427f-8daa-22a299ed14b8","Type":"ContainerStarted","Data":"06cf8ae23917b83afac47a151797581e39293b7a1decee46aed7cab18919788f"} Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.868457 4801 generic.go:334] "Generic (PLEG): container finished" podID="630057a4-ba0a-485b-8ac1-0113c42a9fe5" containerID="57258c0fda32402a7ebb53442871e84392e38f23597f494fa267739d36a616b9" exitCode=0 Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.868518 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" event={"ID":"630057a4-ba0a-485b-8ac1-0113c42a9fe5","Type":"ContainerDied","Data":"57258c0fda32402a7ebb53442871e84392e38f23597f494fa267739d36a616b9"} Dec 06 03:08:27 crc kubenswrapper[4801]: I1206 03:08:27.933739 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:27 crc kubenswrapper[4801]: E1206 03:08:27.934047 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.434033619 +0000 UTC m=+161.556641191 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.039320 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.039593 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.539559692 +0000 UTC m=+161.662167264 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.039812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.040173 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.540164219 +0000 UTC m=+161.662771791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.141684 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.141927 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.641898249 +0000 UTC m=+161.764505811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.142091 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.142445 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.642429034 +0000 UTC m=+161.765036606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.248495 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.248694 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.748663746 +0000 UTC m=+161.871271318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.250966 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.251927 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.751911967 +0000 UTC m=+161.874519539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.305336 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4f7d2a4_ea30_4b87_9bbb_cb7b89193989.slice/crio-94fd8316c699e5c7d3ce2241a8d957921b332dd08e292082bb930bb9f2532eb5.scope\": RecentStats: unable to find data in memory cache]" Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.351825 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.352609 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:28.852587127 +0000 UTC m=+161.975194699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.418530 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:28 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:28 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:28 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.418591 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.453693 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.454239 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:28.954226294 +0000 UTC m=+162.076833856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.554748 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.554854 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.054834882 +0000 UTC m=+162.177442454 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.555117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.555439 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.055431738 +0000 UTC m=+162.178039310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.656693 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.656926 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.156883741 +0000 UTC m=+162.279491323 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.657313 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.658017 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.157999672 +0000 UTC m=+162.280607244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.758813 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.758991 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.25896091 +0000 UTC m=+162.381568472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.759084 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.759430 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.259417962 +0000 UTC m=+162.382025534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.860303 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.860536 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.360505524 +0000 UTC m=+162.483113096 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.860641 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.860993 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.360979938 +0000 UTC m=+162.483587510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.878997 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" event={"ID":"e80f1b1d-bd4e-4890-88eb-daf951411754","Type":"ContainerStarted","Data":"858d6685455a84f3e59955f1298bd2d3fa23d0aaac1361418f03674e95c5d616"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.882084 4801 generic.go:334] "Generic (PLEG): container finished" podID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerID="5c557f7942f387cdc0b78c3d73ed7916c266f478e1ce8e034b20dd08fe602434" exitCode=0 Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.882197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerDied","Data":"5c557f7942f387cdc0b78c3d73ed7916c266f478e1ce8e034b20dd08fe602434"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.886997 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" event={"ID":"ac114e18-3e28-463f-ad3c-38ae077fdac1","Type":"ContainerStarted","Data":"8bd8db95838f827b8a529874a87e49141cd343e53547552edfe2c3e58235fa9b"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.912943 4801 generic.go:334] "Generic (PLEG): container finished" podID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerID="94fd8316c699e5c7d3ce2241a8d957921b332dd08e292082bb930bb9f2532eb5" exitCode=0 Dec 06 03:08:28 crc 
kubenswrapper[4801]: I1206 03:08:28.913032 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerDied","Data":"94fd8316c699e5c7d3ce2241a8d957921b332dd08e292082bb930bb9f2532eb5"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.915683 4801 generic.go:334] "Generic (PLEG): container finished" podID="73474c40-4e21-4384-94be-94d4015e7668" containerID="cb8778628fd335a5d6f0b99b5665ae4b4fd1433e051bef4baf62420ad029c917" exitCode=0 Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.915824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerDied","Data":"cb8778628fd335a5d6f0b99b5665ae4b4fd1433e051bef4baf62420ad029c917"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.918153 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-psxlr" event={"ID":"a0904103-6105-41fd-b158-2f8a5a99b773","Type":"ContainerStarted","Data":"d4b9aeeff1b745ebe0224c78853178cbb1db06f7d78ea441a9fb4a8622819d82"} Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.962350 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.962542 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.462520802 +0000 UTC m=+162.585128374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:28 crc kubenswrapper[4801]: I1206 03:08:28.963004 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:28 crc kubenswrapper[4801]: E1206 03:08:28.963337 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.463322814 +0000 UTC m=+162.585930386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.064978 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.065221 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.565172817 +0000 UTC m=+162.687780399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.065422 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.065892 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.565881836 +0000 UTC m=+162.688489418 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.166182 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.166438 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.666399183 +0000 UTC m=+162.789006745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.166511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.167340 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.667315818 +0000 UTC m=+162.789923390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.185512 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.268638 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.268696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume\") pod \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.268750 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw9vv\" (UniqueName: \"kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv\") pod \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.268861 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.768817791 +0000 UTC m=+162.891425363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.268932 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume\") pod \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\" (UID: \"630057a4-ba0a-485b-8ac1-0113c42a9fe5\") " Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.269395 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.269925 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume" (OuterVolumeSpecName: "config-volume") pod "630057a4-ba0a-485b-8ac1-0113c42a9fe5" (UID: "630057a4-ba0a-485b-8ac1-0113c42a9fe5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.269999 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:29.769973713 +0000 UTC m=+162.892581305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.282826 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "630057a4-ba0a-485b-8ac1-0113c42a9fe5" (UID: "630057a4-ba0a-485b-8ac1-0113c42a9fe5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.283744 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv" (OuterVolumeSpecName: "kube-api-access-fw9vv") pod "630057a4-ba0a-485b-8ac1-0113c42a9fe5" (UID: "630057a4-ba0a-485b-8ac1-0113c42a9fe5"). InnerVolumeSpecName "kube-api-access-fw9vv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.370578 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.370824 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.870793317 +0000 UTC m=+162.993400889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.371004 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.371162 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/630057a4-ba0a-485b-8ac1-0113c42a9fe5-secret-volume\") on node \"crc\" 
DevicePath \"\"" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.371188 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/630057a4-ba0a-485b-8ac1-0113c42a9fe5-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.371202 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw9vv\" (UniqueName: \"kubernetes.io/projected/630057a4-ba0a-485b-8ac1-0113c42a9fe5-kube-api-access-fw9vv\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.371398 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.871388703 +0000 UTC m=+162.993996275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.419986 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:29 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:29 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:29 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.422007 4801 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.439252 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.439649 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="630057a4-ba0a-485b-8ac1-0113c42a9fe5" containerName="collect-profiles" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.439676 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="630057a4-ba0a-485b-8ac1-0113c42a9fe5" containerName="collect-profiles" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.439900 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="630057a4-ba0a-485b-8ac1-0113c42a9fe5" containerName="collect-profiles" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.440652 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.442239 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.443873 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.445239 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.472859 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.473073 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.973040331 +0000 UTC m=+163.095647903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.473243 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.473645 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:29.973637507 +0000 UTC m=+163.096245079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.574150 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.574451 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.07440118 +0000 UTC m=+163.197008772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.574542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.575175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.575291 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.575673 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:30.075658665 +0000 UTC m=+163.198266237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.676541 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.676917 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.677046 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.677187 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.177168299 +0000 UTC m=+163.299775871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.677203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.711945 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.760778 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.778415 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.778826 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.278809065 +0000 UTC m=+163.401416637 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.879644 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.880130 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.380108054 +0000 UTC m=+163.502715626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.981731 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:29 crc kubenswrapper[4801]: E1206 03:08:29.982443 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.482422099 +0000 UTC m=+163.605029671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:29 crc kubenswrapper[4801]: I1206 03:08:29.996498 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 03:08:30 crc kubenswrapper[4801]: W1206 03:08:30.006322 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod72a233fe_7217_45f3_90b4_712d26ba915f.slice/crio-3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b WatchSource:0}: Error finding container 3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b: Status 404 returned error can't find the container with id 3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.082126 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerStarted","Data":"3f5706f1c7cf21d5d801d5ebed7ea6018ed5951248b0de56b8e8ee7e8c49611e"} Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.082741 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.083098 4801 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.583065479 +0000 UTC m=+163.705673051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.083365 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.084472 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.584449397 +0000 UTC m=+163.707056969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.084949 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.085087 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28" event={"ID":"630057a4-ba0a-485b-8ac1-0113c42a9fe5","Type":"ContainerDied","Data":"80784d9c614f23a2f51d751b2ac5efe74c8111356a8ea3196cc8e410bff6da7a"} Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.085212 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80784d9c614f23a2f51d751b2ac5efe74c8111356a8ea3196cc8e410bff6da7a" Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.088134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" event={"ID":"6c18f03b-59b4-4759-ae52-198497bc084d","Type":"ContainerStarted","Data":"58ae47cc19eed8f83932b57e0fc5538b87e31eb7ad93870c6e44fee991b7f6a8"} Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.130044 4801 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.184973 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.185178 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.685141297 +0000 UTC m=+163.807748869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.185508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.185920 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.685911449 +0000 UTC m=+163.808519021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.287271 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.287630 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.787557637 +0000 UTC m=+163.910165219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.287821 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.288330 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.788321087 +0000 UTC m=+163.910928659 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.389721 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.390538 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:30.890217161 +0000 UTC m=+164.012824733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.417804 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:30 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:30 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:30 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.417896 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.491716 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.492167 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-06 03:08:30.992154247 +0000 UTC m=+164.114761809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.592475 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.592879 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.092862748 +0000 UTC m=+164.215470320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.694867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.695404 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.195382819 +0000 UTC m=+164.317990391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.796382 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.796609 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.296575404 +0000 UTC m=+164.419182976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.799259 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.800398 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.300375089 +0000 UTC m=+164.422982661 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.901325 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.901546 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.401508382 +0000 UTC m=+164.524115954 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.901637 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:30 crc kubenswrapper[4801]: E1206 03:08:30.902209 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 03:08:31.402188541 +0000 UTC m=+164.524796113 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-96wqb" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.957632 4801 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-06T03:08:30.130477343Z","Handler":null,"Name":""} Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.961468 4801 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 06 03:08:30 crc kubenswrapper[4801]: I1206 03:08:30.961534 4801 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.002795 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.007858 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: 
"8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.097917 4801 generic.go:334] "Generic (PLEG): container finished" podID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerID="3f5706f1c7cf21d5d801d5ebed7ea6018ed5951248b0de56b8e8ee7e8c49611e" exitCode=0 Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.097999 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerDied","Data":"3f5706f1c7cf21d5d801d5ebed7ea6018ed5951248b0de56b8e8ee7e8c49611e"} Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.099889 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72a233fe-7217-45f3-90b4-712d26ba915f","Type":"ContainerStarted","Data":"3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b"} Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.104593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.223836 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.340869 4801 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.341297 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.420157 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:31 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:31 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:31 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.420241 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.529389 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-96wqb\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:31 crc kubenswrapper[4801]: I1206 03:08:31.652119 4801 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.109724 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c","Type":"ContainerStarted","Data":"aa6790afe920b9f159cda02aa3933fa13aa5216d94a8995f9b7f70a18d3f6a6c"} Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.113993 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72a233fe-7217-45f3-90b4-712d26ba915f","Type":"ContainerStarted","Data":"9bbbf3ab0f914f3405eb34ebe950a4c7c86349606b1f3b2519bea86bef872e50"} Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.116139 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.121119 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-psxlr" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.154366 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=7.154341849 podStartE2EDuration="7.154341849s" podCreationTimestamp="2025-12-06 03:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:32.13037294 +0000 UTC m=+165.252980512" watchObservedRunningTime="2025-12-06 03:08:32.154341849 +0000 UTC m=+165.276949421" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.174100 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" podStartSLOduration=143.174072602 podStartE2EDuration="2m23.174072602s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:32.154848973 +0000 UTC m=+165.277456565" watchObservedRunningTime="2025-12-06 03:08:32.174072602 +0000 UTC m=+165.296680164" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.225032 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zstqj" podStartSLOduration=143.224998833 podStartE2EDuration="2m23.224998833s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:32.197367324 +0000 UTC m=+165.319974896" watchObservedRunningTime="2025-12-06 03:08:32.224998833 +0000 UTC m=+165.347606405" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.272092 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-psxlr" podStartSLOduration=24.272068499 podStartE2EDuration="24.272068499s" podCreationTimestamp="2025-12-06 03:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:32.267811182 +0000 UTC m=+165.390418744" watchObservedRunningTime="2025-12-06 03:08:32.272068499 +0000 UTC m=+165.394676071" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.316434 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-pz6qq" podStartSLOduration=143.316408499 podStartE2EDuration="2m23.316408499s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:32.314199068 +0000 UTC m=+165.436806640" watchObservedRunningTime="2025-12-06 
03:08:32.316408499 +0000 UTC m=+165.439016071" Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.417468 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:32 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:32 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:32 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:32 crc kubenswrapper[4801]: I1206 03:08:32.417575 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.053965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.062640 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/134354b0-1613-4536-aaf8-4e5ad12705f9-metrics-certs\") pod \"network-metrics-daemon-wpnbx\" (UID: \"134354b0-1613-4536-aaf8-4e5ad12705f9\") " pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.290825 4801 patch_prober.go:28] interesting pod/console-f9d7485db-qnr4c container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Dec 06 03:08:33 crc 
kubenswrapper[4801]: I1206 03:08:33.290897 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qnr4c" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.336872 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-wpnbx" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.345354 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-l87sx" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.415568 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:33 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:33 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:33 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.415653 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:33 crc kubenswrapper[4801]: I1206 03:08:33.475703 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:34 crc kubenswrapper[4801]: I1206 03:08:34.416311 4801 patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:34 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:34 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:34 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:34 crc kubenswrapper[4801]: I1206 03:08:34.416401 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:34 crc kubenswrapper[4801]: I1206 03:08:34.882058 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:34 crc kubenswrapper[4801]: I1206 03:08:34.882136 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:34 crc kubenswrapper[4801]: I1206 03:08:34.895455 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:35 crc kubenswrapper[4801]: I1206 03:08:35.134670 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-gcgft" Dec 06 03:08:35 crc kubenswrapper[4801]: I1206 03:08:35.149687 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.149572486 podStartE2EDuration="6.149572486s" podCreationTimestamp="2025-12-06 03:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:08:35.14789754 +0000 UTC m=+168.270505152" watchObservedRunningTime="2025-12-06 03:08:35.149572486 +0000 UTC m=+168.272180068" Dec 06 03:08:35 crc kubenswrapper[4801]: I1206 03:08:35.416867 4801 
patch_prober.go:28] interesting pod/router-default-5444994796-k47rq container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 03:08:35 crc kubenswrapper[4801]: [-]has-synced failed: reason withheld Dec 06 03:08:35 crc kubenswrapper[4801]: [+]process-running ok Dec 06 03:08:35 crc kubenswrapper[4801]: healthz check failed Dec 06 03:08:35 crc kubenswrapper[4801]: I1206 03:08:35.417288 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k47rq" podUID="985b208d-91e2-4e10-b919-0ef77ba89163" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.137389 4801 generic.go:334] "Generic (PLEG): container finished" podID="36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" containerID="aa6790afe920b9f159cda02aa3933fa13aa5216d94a8995f9b7f70a18d3f6a6c" exitCode=0 Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.137472 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c","Type":"ContainerDied","Data":"aa6790afe920b9f159cda02aa3933fa13aa5216d94a8995f9b7f70a18d3f6a6c"} Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.139380 4801 generic.go:334] "Generic (PLEG): container finished" podID="72a233fe-7217-45f3-90b4-712d26ba915f" containerID="9bbbf3ab0f914f3405eb34ebe950a4c7c86349606b1f3b2519bea86bef872e50" exitCode=0 Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.139485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72a233fe-7217-45f3-90b4-712d26ba915f","Type":"ContainerDied","Data":"9bbbf3ab0f914f3405eb34ebe950a4c7c86349606b1f3b2519bea86bef872e50"} Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.416527 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:36 crc kubenswrapper[4801]: I1206 03:08:36.419003 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k47rq" Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.362804 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.453637 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access\") pod \"72a233fe-7217-45f3-90b4-712d26ba915f\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.454299 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir\") pod \"72a233fe-7217-45f3-90b4-712d26ba915f\" (UID: \"72a233fe-7217-45f3-90b4-712d26ba915f\") " Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.454444 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "72a233fe-7217-45f3-90b4-712d26ba915f" (UID: "72a233fe-7217-45f3-90b4-712d26ba915f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.454982 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/72a233fe-7217-45f3-90b4-712d26ba915f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.459938 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "72a233fe-7217-45f3-90b4-712d26ba915f" (UID: "72a233fe-7217-45f3-90b4-712d26ba915f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:08:39 crc kubenswrapper[4801]: I1206 03:08:39.556724 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/72a233fe-7217-45f3-90b4-712d26ba915f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.163616 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"72a233fe-7217-45f3-90b4-712d26ba915f","Type":"ContainerDied","Data":"3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b"} Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.163658 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed7325e608b628860c6f68c0ffff6d301197c332c22d31cce081f734f1fe79b" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.163718 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.603417 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.672211 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir\") pod \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.672383 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access\") pod \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\" (UID: \"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c\") " Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.673286 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" (UID: "36cef5b2-4531-4e8b-b3f9-67539cd8ac7c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.676406 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" (UID: "36cef5b2-4531-4e8b-b3f9-67539cd8ac7c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.773363 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:40 crc kubenswrapper[4801]: I1206 03:08:40.773402 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/36cef5b2-4531-4e8b-b3f9-67539cd8ac7c-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:08:41 crc kubenswrapper[4801]: I1206 03:08:41.169901 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:08:41 crc kubenswrapper[4801]: I1206 03:08:41.170365 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:08:41 crc kubenswrapper[4801]: I1206 03:08:41.170797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"36cef5b2-4531-4e8b-b3f9-67539cd8ac7c","Type":"ContainerDied","Data":"f42bfaee2d691a97a1f2f476635575e378db2275c2e3574f339f8f7793c34f35"} Dec 06 03:08:41 crc kubenswrapper[4801]: I1206 03:08:41.170825 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42bfaee2d691a97a1f2f476635575e378db2275c2e3574f339f8f7793c34f35" Dec 06 03:08:41 crc kubenswrapper[4801]: I1206 03:08:41.170878 4801 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 03:08:46 crc kubenswrapper[4801]: I1206 03:08:46.893099 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:46 crc kubenswrapper[4801]: I1206 03:08:46.897310 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:08:53 crc kubenswrapper[4801]: I1206 03:08:53.480624 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-hvl68" Dec 06 03:08:55 crc kubenswrapper[4801]: I1206 03:08:55.743941 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.825741 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 03:09:01 crc kubenswrapper[4801]: E1206 03:09:01.827138 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.827158 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: E1206 03:09:01.827174 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a233fe-7217-45f3-90b4-712d26ba915f" containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.827182 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a233fe-7217-45f3-90b4-712d26ba915f" containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.827509 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a233fe-7217-45f3-90b4-712d26ba915f" 
containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.827532 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cef5b2-4531-4e8b-b3f9-67539cd8ac7c" containerName="pruner" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.828053 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.830042 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.830600 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.836932 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.981725 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:01 crc kubenswrapper[4801]: I1206 03:09:01.981821 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:02 crc kubenswrapper[4801]: I1206 03:09:02.083209 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:02 crc kubenswrapper[4801]: I1206 03:09:02.083312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:02 crc kubenswrapper[4801]: I1206 03:09:02.083475 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:02 crc kubenswrapper[4801]: I1206 03:09:02.123226 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:02 crc kubenswrapper[4801]: I1206 03:09:02.151741 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: E1206 03:09:07.348838 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 03:09:07 crc kubenswrapper[4801]: E1206 03:09:07.349446 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6qc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Co
ntainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-fn52d_openshift-marketplace(1229f263-2232-4e9c-b2ac-4eabe1b3ee7b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 03:09:07 crc kubenswrapper[4801]: E1206 03:09:07.350764 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-fn52d" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.437128 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.437955 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.440641 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.563838 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.564413 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 
06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.564466 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.666805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.666949 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.666970 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.667000 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.667069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:07 crc kubenswrapper[4801]: I1206 03:09:07.774610 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access\") pod \"installer-9-crc\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:08 crc kubenswrapper[4801]: I1206 03:09:08.064821 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.775802 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-fn52d" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.859931 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.860134 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b2qms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-htf5h_openshift-marketplace(83259a75-730d-4f15-8a2f-d8be13ec335a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.862175 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-htf5h" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" Dec 06 03:09:10 crc 
kubenswrapper[4801]: E1206 03:09:10.869055 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.869214 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjrqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-l77v9_openshift-marketplace(98beccef-be81-4934-b000-a41b741ed810): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 03:09:10 crc kubenswrapper[4801]: E1206 03:09:10.870435 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-l77v9" podUID="98beccef-be81-4934-b000-a41b741ed810" Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.170192 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.170689 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.170742 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.171444 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.171571 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5" gracePeriod=600 Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.249317 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-wpnbx"] Dec 06 03:09:11 crc kubenswrapper[4801]: W1206 03:09:11.260114 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134354b0_1613_4536_aaf8_4e5ad12705f9.slice/crio-22d36a9a5edcdd7bc6369aa6e29b3c8b397ad0ded0070d3adb6cdb3b38d64718 WatchSource:0}: Error finding container 22d36a9a5edcdd7bc6369aa6e29b3c8b397ad0ded0070d3adb6cdb3b38d64718: Status 404 returned error can't find the container with id 22d36a9a5edcdd7bc6369aa6e29b3c8b397ad0ded0070d3adb6cdb3b38d64718 Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.321554 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:09:11 crc kubenswrapper[4801]: W1206 03:09:11.329313 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40e9cd29_e5e3_44c5_95e8_d6e799eaf1f5.slice/crio-8ec14393e4a7fd769476ffe4aba75bdd68f37afcae20d7d4096ec001fcec599a WatchSource:0}: Error finding container 8ec14393e4a7fd769476ffe4aba75bdd68f37afcae20d7d4096ec001fcec599a: Status 404 returned error can't find the container with id 8ec14393e4a7fd769476ffe4aba75bdd68f37afcae20d7d4096ec001fcec599a Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.335613 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-wpnbx" event={"ID":"134354b0-1613-4536-aaf8-4e5ad12705f9","Type":"ContainerStarted","Data":"22d36a9a5edcdd7bc6369aa6e29b3c8b397ad0ded0070d3adb6cdb3b38d64718"} Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.386415 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 03:09:11 crc kubenswrapper[4801]: I1206 03:09:11.392501 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 06 03:09:11 crc kubenswrapper[4801]: W1206 03:09:11.398924 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf7338dd1_5098_43d3_b2c1_9beb2ffc5885.slice/crio-05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f WatchSource:0}: Error finding container 05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f: Status 404 returned error can't find the container with id 05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f Dec 06 03:09:11 crc kubenswrapper[4801]: W1206 03:09:11.425641 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod73230722_21f5_42a5_9ffb_8856120e8ecb.slice/crio-92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d WatchSource:0}: Error finding container 92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d: Status 404 returned error can't find the container with id 92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.347804 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5" exitCode=0 Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.347886 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5"} Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.350662 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" event={"ID":"6c18f03b-59b4-4759-ae52-198497bc084d","Type":"ContainerStarted","Data":"cb0ff4cdfb3f9534bafb96a961a99c3e2014aa36b1c8fe446f21ef775ba66032"} Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.352171 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7338dd1-5098-43d3-b2c1-9beb2ffc5885","Type":"ContainerStarted","Data":"05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f"} Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.353081 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" event={"ID":"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5","Type":"ContainerStarted","Data":"8ec14393e4a7fd769476ffe4aba75bdd68f37afcae20d7d4096ec001fcec599a"} Dec 06 03:09:12 crc kubenswrapper[4801]: I1206 03:09:12.354931 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73230722-21f5-42a5-9ffb-8856120e8ecb","Type":"ContainerStarted","Data":"92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.371680 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.377854 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" 
event={"ID":"6c18f03b-59b4-4759-ae52-198497bc084d","Type":"ContainerStarted","Data":"982eec80fb3105057b86c65d825b24e635c8e7fb140a08bb5f5f7cb75b2eb6af"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.392400 4801 generic.go:334] "Generic (PLEG): container finished" podID="f7338dd1-5098-43d3-b2c1-9beb2ffc5885" containerID="2ac75b3fb353ac6300b81118fa62e7f7000e0939fc1d51d23b13779074c7ea66" exitCode=0 Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.392467 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7338dd1-5098-43d3-b2c1-9beb2ffc5885","Type":"ContainerDied","Data":"2ac75b3fb353ac6300b81118fa62e7f7000e0939fc1d51d23b13779074c7ea66"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.395374 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" event={"ID":"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5","Type":"ContainerStarted","Data":"1d4a58ca413928d9c8f6a37d3c282d9ccfc83a30d36bb807d6cdec371c34dd91"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.395467 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.398741 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73230722-21f5-42a5-9ffb-8856120e8ecb","Type":"ContainerStarted","Data":"153446c23056948e5b03545c57be4c629c818bc5b011e7c28fd59a9ce41fca02"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.413740 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" event={"ID":"134354b0-1613-4536-aaf8-4e5ad12705f9","Type":"ContainerStarted","Data":"c324c79c4e64df1bcacfbd6331a28483cb050921d93989ec85ed4e98c3830d90"} Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.424003 4801 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-d79m7" podStartSLOduration=65.423974734 podStartE2EDuration="1m5.423974734s" podCreationTimestamp="2025-12-06 03:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:09:13.414570044 +0000 UTC m=+206.537177616" watchObservedRunningTime="2025-12-06 03:09:13.423974734 +0000 UTC m=+206.546582306" Dec 06 03:09:13 crc kubenswrapper[4801]: I1206 03:09:13.445837 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" podStartSLOduration=184.445817417 podStartE2EDuration="3m4.445817417s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:09:13.445318051 +0000 UTC m=+206.567925623" watchObservedRunningTime="2025-12-06 03:09:13.445817417 +0000 UTC m=+206.568424989" Dec 06 03:09:16 crc kubenswrapper[4801]: I1206 03:09:16.960492 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:16 crc kubenswrapper[4801]: I1206 03:09:16.974625 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.974603724 podStartE2EDuration="9.974603724s" podCreationTimestamp="2025-12-06 03:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:09:13.482030762 +0000 UTC m=+206.604638354" watchObservedRunningTime="2025-12-06 03:09:16.974603724 +0000 UTC m=+210.097211296" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.011819 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir\") pod \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.012266 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access\") pod \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\" (UID: \"f7338dd1-5098-43d3-b2c1-9beb2ffc5885\") " Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.012031 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f7338dd1-5098-43d3-b2c1-9beb2ffc5885" (UID: "f7338dd1-5098-43d3-b2c1-9beb2ffc5885"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.012538 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.018114 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f7338dd1-5098-43d3-b2c1-9beb2ffc5885" (UID: "f7338dd1-5098-43d3-b2c1-9beb2ffc5885"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.114099 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f7338dd1-5098-43d3-b2c1-9beb2ffc5885-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.436797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f7338dd1-5098-43d3-b2c1-9beb2ffc5885","Type":"ContainerDied","Data":"05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f"} Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.436856 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05bfca2a99b2b2beb6689ec224e76744e145b7da6f83ce5dd7c1f4a9b0d4817f" Dec 06 03:09:17 crc kubenswrapper[4801]: I1206 03:09:17.436864 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.450872 4801 generic.go:334] "Generic (PLEG): container finished" podID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerID="16fa8d415a95f2c116ea61584d356d0094022dd37011ba57524557f922407acd" exitCode=0 Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.450933 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerDied","Data":"16fa8d415a95f2c116ea61584d356d0094022dd37011ba57524557f922407acd"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.454988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-wpnbx" event={"ID":"134354b0-1613-4536-aaf8-4e5ad12705f9","Type":"ContainerStarted","Data":"9f1b6b023e04362674e0f94751c00aef4f3b8a5a1334432bb65d548b27b8f7d8"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.456375 4801 generic.go:334] "Generic (PLEG): container finished" podID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerID="aef97fa14413311e60481dcbb5761222a29df1bdf1531235975b6f1fb5e63257" exitCode=0 Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.456434 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerDied","Data":"aef97fa14413311e60481dcbb5761222a29df1bdf1531235975b6f1fb5e63257"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.460325 4801 generic.go:334] "Generic (PLEG): container finished" podID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerID="2c33f80586ea62b4ccfb98769acafba56838d13fdfa0c53217ac431b7b1c426d" exitCode=0 Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.460485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" 
event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerDied","Data":"2c33f80586ea62b4ccfb98769acafba56838d13fdfa0c53217ac431b7b1c426d"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.463177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerStarted","Data":"f226db75da46d5d849376af707c45b487562d51add1d9394b47d8151c404febf"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.464635 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerStarted","Data":"76af69546af01f5928c4c7e1e3f06caa6cee56b2931231c2e054d38f509702b6"} Dec 06 03:09:19 crc kubenswrapper[4801]: I1206 03:09:19.547298 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-wpnbx" podStartSLOduration=190.547274262 podStartE2EDuration="3m10.547274262s" podCreationTimestamp="2025-12-06 03:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:09:19.531126396 +0000 UTC m=+212.653733978" watchObservedRunningTime="2025-12-06 03:09:19.547274262 +0000 UTC m=+212.669881834" Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.471957 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerStarted","Data":"687327d71fcebb40b983770cf5c5dc9ff3409445725a6853a00ed28b5df79a8c"} Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.475987 4801 generic.go:334] "Generic (PLEG): container finished" podID="73474c40-4e21-4384-94be-94d4015e7668" containerID="f226db75da46d5d849376af707c45b487562d51add1d9394b47d8151c404febf" exitCode=0 Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 
03:09:20.476045 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerDied","Data":"f226db75da46d5d849376af707c45b487562d51add1d9394b47d8151c404febf"} Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.482017 4801 generic.go:334] "Generic (PLEG): container finished" podID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerID="76af69546af01f5928c4c7e1e3f06caa6cee56b2931231c2e054d38f509702b6" exitCode=0 Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.482076 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerDied","Data":"76af69546af01f5928c4c7e1e3f06caa6cee56b2931231c2e054d38f509702b6"} Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.486070 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerStarted","Data":"eab10fa9a1948f98879a367260cb00255d569f7d4c955d11778f83ed47755539"} Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.488543 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerStarted","Data":"4356a6dc5053a7dcd2ecfcb26ed1c4a68856b99aeb6af91589873e1e78947608"} Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.499390 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfbpn" podStartSLOduration=19.917183409 podStartE2EDuration="1m0.499367266s" podCreationTimestamp="2025-12-06 03:08:20 +0000 UTC" firstStartedPulling="2025-12-06 03:08:39.305049813 +0000 UTC m=+172.427657385" lastFinishedPulling="2025-12-06 03:09:19.88723367 +0000 UTC m=+213.009841242" observedRunningTime="2025-12-06 
03:09:20.497492219 +0000 UTC m=+213.620099791" watchObservedRunningTime="2025-12-06 03:09:20.499367266 +0000 UTC m=+213.621974838" Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.568263 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fv89x" podStartSLOduration=21.888683131 podStartE2EDuration="1m2.568237407s" podCreationTimestamp="2025-12-06 03:08:18 +0000 UTC" firstStartedPulling="2025-12-06 03:08:39.30533061 +0000 UTC m=+172.427938182" lastFinishedPulling="2025-12-06 03:09:19.984884886 +0000 UTC m=+213.107492458" observedRunningTime="2025-12-06 03:09:20.549440288 +0000 UTC m=+213.672047880" watchObservedRunningTime="2025-12-06 03:09:20.568237407 +0000 UTC m=+213.690844979" Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.790273 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:09:20 crc kubenswrapper[4801]: I1206 03:09:20.790363 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.511652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerStarted","Data":"94a94c67049b7d911caa380a3a59c9252c9b09118108fbff4855adb87f9de09e"} Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.528913 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerStarted","Data":"035c9721736274212fae746c382e2cadccde5adeacb625cecac1d2d75e4a5178"} Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.535357 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8t2r5" 
podStartSLOduration=18.977817088 podStartE2EDuration="1m0.535338453s" podCreationTimestamp="2025-12-06 03:08:21 +0000 UTC" firstStartedPulling="2025-12-06 03:08:39.304749385 +0000 UTC m=+172.427356957" lastFinishedPulling="2025-12-06 03:09:20.86227075 +0000 UTC m=+213.984878322" observedRunningTime="2025-12-06 03:09:21.535002132 +0000 UTC m=+214.657609704" watchObservedRunningTime="2025-12-06 03:09:21.535338453 +0000 UTC m=+214.657946025" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.538022 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ffnmp" podStartSLOduration=20.966173386 podStartE2EDuration="1m1.537997974s" podCreationTimestamp="2025-12-06 03:08:20 +0000 UTC" firstStartedPulling="2025-12-06 03:08:39.304885238 +0000 UTC m=+172.427492810" lastFinishedPulling="2025-12-06 03:09:19.876709826 +0000 UTC m=+212.999317398" observedRunningTime="2025-12-06 03:09:20.586490339 +0000 UTC m=+213.709097911" watchObservedRunningTime="2025-12-06 03:09:21.537997974 +0000 UTC m=+214.660605546" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.554787 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvjj9" podStartSLOduration=18.841029858 podStartE2EDuration="1m0.554768191s" podCreationTimestamp="2025-12-06 03:08:21 +0000 UTC" firstStartedPulling="2025-12-06 03:08:39.303932582 +0000 UTC m=+172.426540154" lastFinishedPulling="2025-12-06 03:09:21.017670915 +0000 UTC m=+214.140278487" observedRunningTime="2025-12-06 03:09:21.552554603 +0000 UTC m=+214.675162175" watchObservedRunningTime="2025-12-06 03:09:21.554768191 +0000 UTC m=+214.677375763" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.636093 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.636181 4801 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:09:21 crc kubenswrapper[4801]: I1206 03:09:21.857744 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qfbpn" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="registry-server" probeResult="failure" output=< Dec 06 03:09:21 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 03:09:21 crc kubenswrapper[4801]: > Dec 06 03:09:22 crc kubenswrapper[4801]: I1206 03:09:22.043548 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:22 crc kubenswrapper[4801]: I1206 03:09:22.043609 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:22 crc kubenswrapper[4801]: I1206 03:09:22.693266 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8t2r5" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="registry-server" probeResult="failure" output=< Dec 06 03:09:22 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 03:09:22 crc kubenswrapper[4801]: > Dec 06 03:09:23 crc kubenswrapper[4801]: I1206 03:09:23.088630 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pvjj9" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="registry-server" probeResult="failure" output=< Dec 06 03:09:23 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 03:09:23 crc kubenswrapper[4801]: > Dec 06 03:09:23 crc kubenswrapper[4801]: I1206 03:09:23.550281 4801 generic.go:334] "Generic (PLEG): container finished" podID="98beccef-be81-4934-b000-a41b741ed810" containerID="cfe1c8b5e991a9f391b3839fe2f87170766dca412949ebd55f797b7c01b4c1f0" exitCode=0 Dec 06 03:09:23 crc 
kubenswrapper[4801]: I1206 03:09:23.550367 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerDied","Data":"cfe1c8b5e991a9f391b3839fe2f87170766dca412949ebd55f797b7c01b4c1f0"} Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.297517 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.299122 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.327971 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cqsjn"] Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.401199 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.595197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerStarted","Data":"00046d95d661d68549ce20c410f2a2e754726746ffe22ae466de5b102815bde5"} Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.617192 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l77v9" podStartSLOduration=8.366517623 podStartE2EDuration="1m11.617172562s" podCreationTimestamp="2025-12-06 03:08:18 +0000 UTC" firstStartedPulling="2025-12-06 03:08:24.842650316 +0000 UTC m=+157.965257888" lastFinishedPulling="2025-12-06 03:09:28.093305255 +0000 UTC m=+221.215912827" observedRunningTime="2025-12-06 03:09:29.615340126 +0000 UTC m=+222.737947698" watchObservedRunningTime="2025-12-06 03:09:29.617172562 +0000 UTC 
m=+222.739780134" Dec 06 03:09:29 crc kubenswrapper[4801]: I1206 03:09:29.659228 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.444351 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.445665 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.491684 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.603052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerStarted","Data":"2c0af5d7c87a5b0cbf1c770b0e60bb544a8129a3d2302ac92e8c88a9d52a5934"} Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.607152 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerStarted","Data":"ca8a2fd52ca0ae4feffaa5233f821c38586d6ead6a2b5b769436cb9a047148bd"} Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.651520 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.845583 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:09:30 crc kubenswrapper[4801]: I1206 03:09:30.888705 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 
06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.614104 4801 generic.go:334] "Generic (PLEG): container finished" podID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerID="2c0af5d7c87a5b0cbf1c770b0e60bb544a8129a3d2302ac92e8c88a9d52a5934" exitCode=0 Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.614196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerDied","Data":"2c0af5d7c87a5b0cbf1c770b0e60bb544a8129a3d2302ac92e8c88a9d52a5934"} Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.616290 4801 generic.go:334] "Generic (PLEG): container finished" podID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerID="ca8a2fd52ca0ae4feffaa5233f821c38586d6ead6a2b5b769436cb9a047148bd" exitCode=0 Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.616523 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerDied","Data":"ca8a2fd52ca0ae4feffaa5233f821c38586d6ead6a2b5b769436cb9a047148bd"} Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.662643 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.680477 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.736908 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.887618 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:09:31 crc kubenswrapper[4801]: I1206 03:09:31.888058 4801 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-fv89x" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="registry-server" containerID="cri-o://eab10fa9a1948f98879a367260cb00255d569f7d4c955d11778f83ed47755539" gracePeriod=2 Dec 06 03:09:32 crc kubenswrapper[4801]: I1206 03:09:32.086892 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:32 crc kubenswrapper[4801]: I1206 03:09:32.128788 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:33 crc kubenswrapper[4801]: I1206 03:09:33.687026 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:09:33 crc kubenswrapper[4801]: I1206 03:09:33.687658 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfbpn" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="registry-server" containerID="cri-o://687327d71fcebb40b983770cf5c5dc9ff3409445725a6853a00ed28b5df79a8c" gracePeriod=2 Dec 06 03:09:34 crc kubenswrapper[4801]: I1206 03:09:34.980231 4801 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-p8b96 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 03:09:34 crc kubenswrapper[4801]: I1206 03:09:34.980322 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-p8b96" podUID="d58c5185-9cfb-4e5f-956e-d12e12b5e81e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 03:09:35 crc kubenswrapper[4801]: I1206 03:09:35.666900 4801 generic.go:334] "Generic (PLEG): container finished" podID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerID="eab10fa9a1948f98879a367260cb00255d569f7d4c955d11778f83ed47755539" exitCode=0 Dec 06 03:09:35 crc kubenswrapper[4801]: I1206 03:09:35.667023 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerDied","Data":"eab10fa9a1948f98879a367260cb00255d569f7d4c955d11778f83ed47755539"} Dec 06 03:09:35 crc kubenswrapper[4801]: I1206 03:09:35.672139 4801 generic.go:334] "Generic (PLEG): container finished" podID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerID="687327d71fcebb40b983770cf5c5dc9ff3409445725a6853a00ed28b5df79a8c" exitCode=0 Dec 06 03:09:35 crc kubenswrapper[4801]: I1206 03:09:35.672184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerDied","Data":"687327d71fcebb40b983770cf5c5dc9ff3409445725a6853a00ed28b5df79a8c"} Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.087139 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"] Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.087492 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvjj9" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="registry-server" containerID="cri-o://035c9721736274212fae746c382e2cadccde5adeacb625cecac1d2d75e4a5178" gracePeriod=2 Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.552692 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.609360 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnhk9\" (UniqueName: \"kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9\") pod \"2251dd16-904f-4bf6-aac8-3a82a0778689\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.609526 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities\") pod \"2251dd16-904f-4bf6-aac8-3a82a0778689\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.609570 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content\") pod \"2251dd16-904f-4bf6-aac8-3a82a0778689\" (UID: \"2251dd16-904f-4bf6-aac8-3a82a0778689\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.610726 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities" (OuterVolumeSpecName: "utilities") pod "2251dd16-904f-4bf6-aac8-3a82a0778689" (UID: "2251dd16-904f-4bf6-aac8-3a82a0778689"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.616636 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9" (OuterVolumeSpecName: "kube-api-access-qnhk9") pod "2251dd16-904f-4bf6-aac8-3a82a0778689" (UID: "2251dd16-904f-4bf6-aac8-3a82a0778689"). InnerVolumeSpecName "kube-api-access-qnhk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.653315 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2251dd16-904f-4bf6-aac8-3a82a0778689" (UID: "2251dd16-904f-4bf6-aac8-3a82a0778689"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.681167 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fv89x" event={"ID":"2251dd16-904f-4bf6-aac8-3a82a0778689","Type":"ContainerDied","Data":"348d8fd3637dfae59795914a12ccf72b5ca97637191c0c7f73ad8e3fccfe7c8b"} Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.681275 4801 scope.go:117] "RemoveContainer" containerID="eab10fa9a1948f98879a367260cb00255d569f7d4c955d11778f83ed47755539" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.681496 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fv89x" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.711138 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.711166 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2251dd16-904f-4bf6-aac8-3a82a0778689-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.711177 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnhk9\" (UniqueName: \"kubernetes.io/projected/2251dd16-904f-4bf6-aac8-3a82a0778689-kube-api-access-qnhk9\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.742035 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.749902 4801 scope.go:117] "RemoveContainer" containerID="16fa8d415a95f2c116ea61584d356d0094022dd37011ba57524557f922407acd" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.752930 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.756927 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fv89x"] Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.788416 4801 scope.go:117] "RemoveContainer" containerID="fc5849d161866678161798822c74c98d3cbbe7df253b953e29a875564abd6d00" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.812031 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content\") pod \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.812107 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities\") pod \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.812185 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sxtj\" (UniqueName: \"kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj\") pod \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\" (UID: \"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989\") " Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.813256 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities" (OuterVolumeSpecName: "utilities") pod "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" (UID: "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.815061 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj" (OuterVolumeSpecName: "kube-api-access-7sxtj") pod "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" (UID: "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989"). InnerVolumeSpecName "kube-api-access-7sxtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.835988 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" (UID: "d4f7d2a4-ea30-4b87-9bbb-cb7b89193989"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.914088 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.914125 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:36 crc kubenswrapper[4801]: I1206 03:09:36.914135 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sxtj\" (UniqueName: \"kubernetes.io/projected/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989-kube-api-access-7sxtj\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.220390 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" path="/var/lib/kubelet/pods/2251dd16-904f-4bf6-aac8-3a82a0778689/volumes" Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.692223 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfbpn" event={"ID":"d4f7d2a4-ea30-4b87-9bbb-cb7b89193989","Type":"ContainerDied","Data":"df25a277338087fd6eeac14bdd8c64d2570ee1041b4039ada8615739bb211d14"} Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.692299 4801 scope.go:117] "RemoveContainer" containerID="687327d71fcebb40b983770cf5c5dc9ff3409445725a6853a00ed28b5df79a8c" Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.692249 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfbpn" Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.716536 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:09:37 crc kubenswrapper[4801]: I1206 03:09:37.720420 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfbpn"] Dec 06 03:09:38 crc kubenswrapper[4801]: I1206 03:09:38.698562 4801 generic.go:334] "Generic (PLEG): container finished" podID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerID="035c9721736274212fae746c382e2cadccde5adeacb625cecac1d2d75e4a5178" exitCode=0 Dec 06 03:09:38 crc kubenswrapper[4801]: I1206 03:09:38.698618 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerDied","Data":"035c9721736274212fae746c382e2cadccde5adeacb625cecac1d2d75e4a5178"} Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.221426 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" path="/var/lib/kubelet/pods/d4f7d2a4-ea30-4b87-9bbb-cb7b89193989/volumes" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.222375 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.222413 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.268165 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.564173 4801 scope.go:117] "RemoveContainer" containerID="2c33f80586ea62b4ccfb98769acafba56838d13fdfa0c53217ac431b7b1c426d" Dec 
06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.756772 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.883830 4801 scope.go:117] "RemoveContainer" containerID="94fd8316c699e5c7d3ce2241a8d957921b332dd08e292082bb930bb9f2532eb5" Dec 06 03:09:39 crc kubenswrapper[4801]: I1206 03:09:39.926215 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.056587 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities\") pod \"aec29137-ee19-4a21-85d3-4efbf7cf342b\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.056662 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content\") pod \"aec29137-ee19-4a21-85d3-4efbf7cf342b\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.056803 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pncc\" (UniqueName: \"kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc\") pod \"aec29137-ee19-4a21-85d3-4efbf7cf342b\" (UID: \"aec29137-ee19-4a21-85d3-4efbf7cf342b\") " Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.057544 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities" (OuterVolumeSpecName: "utilities") pod "aec29137-ee19-4a21-85d3-4efbf7cf342b" (UID: "aec29137-ee19-4a21-85d3-4efbf7cf342b"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.063178 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc" (OuterVolumeSpecName: "kube-api-access-4pncc") pod "aec29137-ee19-4a21-85d3-4efbf7cf342b" (UID: "aec29137-ee19-4a21-85d3-4efbf7cf342b"). InnerVolumeSpecName "kube-api-access-4pncc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.158955 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.159008 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pncc\" (UniqueName: \"kubernetes.io/projected/aec29137-ee19-4a21-85d3-4efbf7cf342b-kube-api-access-4pncc\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.464358 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aec29137-ee19-4a21-85d3-4efbf7cf342b" (UID: "aec29137-ee19-4a21-85d3-4efbf7cf342b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.565001 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aec29137-ee19-4a21-85d3-4efbf7cf342b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.720845 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvjj9" event={"ID":"aec29137-ee19-4a21-85d3-4efbf7cf342b","Type":"ContainerDied","Data":"fff3fbe8e40ef3b86024c3d37c1f6663f83b8a1f54786d42d9eaf969af71aea9"} Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.720949 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvjj9" Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.748766 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"] Dec 06 03:09:40 crc kubenswrapper[4801]: I1206 03:09:40.752674 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvjj9"] Dec 06 03:09:41 crc kubenswrapper[4801]: I1206 03:09:41.221224 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" path="/var/lib/kubelet/pods/aec29137-ee19-4a21-85d3-4efbf7cf342b/volumes" Dec 06 03:09:42 crc kubenswrapper[4801]: I1206 03:09:42.134525 4801 scope.go:117] "RemoveContainer" containerID="035c9721736274212fae746c382e2cadccde5adeacb625cecac1d2d75e4a5178" Dec 06 03:09:42 crc kubenswrapper[4801]: I1206 03:09:42.319225 4801 scope.go:117] "RemoveContainer" containerID="76af69546af01f5928c4c7e1e3f06caa6cee56b2931231c2e054d38f509702b6" Dec 06 03:09:42 crc kubenswrapper[4801]: I1206 03:09:42.349059 4801 scope.go:117] "RemoveContainer" containerID="3f5706f1c7cf21d5d801d5ebed7ea6018ed5951248b0de56b8e8ee7e8c49611e" Dec 06 03:09:43 crc 
kubenswrapper[4801]: I1206 03:09:43.752355 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerStarted","Data":"68ea112c77ed408c1b68b0c04ad1b04fb1233f53a3ff189033ea773d2f4f943e"} Dec 06 03:09:43 crc kubenswrapper[4801]: I1206 03:09:43.754966 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerStarted","Data":"78ea3635cf2d146a6158feefab0fd03d04fdb84e4f81a15d8731f577300ed4fe"} Dec 06 03:09:44 crc kubenswrapper[4801]: I1206 03:09:44.776147 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fn52d" podStartSLOduration=6.174062629 podStartE2EDuration="1m26.776129338s" podCreationTimestamp="2025-12-06 03:08:18 +0000 UTC" firstStartedPulling="2025-12-06 03:08:21.717198875 +0000 UTC m=+154.839806447" lastFinishedPulling="2025-12-06 03:09:42.319265584 +0000 UTC m=+235.441873156" observedRunningTime="2025-12-06 03:09:44.77358291 +0000 UTC m=+237.896190482" watchObservedRunningTime="2025-12-06 03:09:44.776129338 +0000 UTC m=+237.898736910" Dec 06 03:09:44 crc kubenswrapper[4801]: I1206 03:09:44.791928 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-htf5h" podStartSLOduration=8.425142261 podStartE2EDuration="1m26.791908874s" podCreationTimestamp="2025-12-06 03:08:18 +0000 UTC" firstStartedPulling="2025-12-06 03:08:23.838068779 +0000 UTC m=+156.960676351" lastFinishedPulling="2025-12-06 03:09:42.204835392 +0000 UTC m=+235.327442964" observedRunningTime="2025-12-06 03:09:44.790346246 +0000 UTC m=+237.912953818" watchObservedRunningTime="2025-12-06 03:09:44.791908874 +0000 UTC m=+237.914516446" Dec 06 03:09:48 crc kubenswrapper[4801]: I1206 03:09:48.567599 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:09:48 crc kubenswrapper[4801]: I1206 03:09:48.567700 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:09:48 crc kubenswrapper[4801]: I1206 03:09:48.606328 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:09:48 crc kubenswrapper[4801]: I1206 03:09:48.821170 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.223299 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.223371 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.258907 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.854529 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970093 4801 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970393 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970410 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" 
containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970422 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970430 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970448 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970457 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970467 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970475 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="extract-utilities" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970484 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970492 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970507 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970517 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" 
containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970530 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970538 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" containerName="extract-content" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970549 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970557 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970571 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970578 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: E1206 03:09:49.970590 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7338dd1-5098-43d3-b2c1-9beb2ffc5885" containerName="pruner" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970600 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7338dd1-5098-43d3-b2c1-9beb2ffc5885" containerName="pruner" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970728 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2251dd16-904f-4bf6-aac8-3a82a0778689" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970745 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec29137-ee19-4a21-85d3-4efbf7cf342b" 
containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970791 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f7d2a4-ea30-4b87-9bbb-cb7b89193989" containerName="registry-server" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.970801 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7338dd1-5098-43d3-b2c1-9beb2ffc5885" containerName="pruner" Dec 06 03:09:49 crc kubenswrapper[4801]: I1206 03:09:49.971250 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.015221 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.035426 4801 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036205 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80" gracePeriod=15 Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036326 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa" gracePeriod=15 Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036277 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" containerID="cri-o://60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce" gracePeriod=15 Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036304 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec" gracePeriod=15 Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036321 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44" gracePeriod=15 Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.036691 4801 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037019 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037042 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037058 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037067 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037077 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037085 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037094 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037101 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037114 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037124 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037135 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037142 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.037152 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037161 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 03:09:50 crc 
kubenswrapper[4801]: I1206 03:09:50.037297 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037312 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037324 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037335 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037346 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.037359 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.098335 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.098382 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.098419 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.098958 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.098992 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.099026 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.099043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.099058 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201126 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201204 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201290 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201311 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201328 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201398 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201449 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201562 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201610 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201661 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201685 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201705 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.201783 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.313073 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:09:50 crc kubenswrapper[4801]: W1206 03:09:50.332165 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-eab11b824ece43494f7b40b2bed18b41796d3036f37a4ce859e199ed69caf758 WatchSource:0}: Error finding container eab11b824ece43494f7b40b2bed18b41796d3036f37a4ce859e199ed69caf758: Status 404 returned error can't find the container with id eab11b824ece43494f7b40b2bed18b41796d3036f37a4ce859e199ed69caf758 Dec 06 03:09:50 crc kubenswrapper[4801]: E1206 03:09:50.335495 4801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e819730ceb08b 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 03:09:50.334709899 +0000 UTC m=+243.457317471,LastTimestamp:2025-12-06 03:09:50.334709899 +0000 UTC m=+243.457317471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 03:09:50 crc kubenswrapper[4801]: I1206 03:09:50.797602 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"eab11b824ece43494f7b40b2bed18b41796d3036f37a4ce859e199ed69caf758"} Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.804041 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359"} Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.804982 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.806568 4801 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.807684 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.808420 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec" exitCode=0 Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.808443 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa" exitCode=0 Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.808452 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44" exitCode=0 Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.808460 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce" exitCode=2 Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.808531 4801 scope.go:117] "RemoveContainer" containerID="b25319bc28ce3506310f687199ea71c4a07adaa7b6c0e68479023529db806cc3" Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.809880 4801 generic.go:334] "Generic (PLEG): container finished" podID="73230722-21f5-42a5-9ffb-8856120e8ecb" containerID="153446c23056948e5b03545c57be4c629c818bc5b011e7c28fd59a9ce41fca02" exitCode=0 Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.809912 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73230722-21f5-42a5-9ffb-8856120e8ecb","Type":"ContainerDied","Data":"153446c23056948e5b03545c57be4c629c818bc5b011e7c28fd59a9ce41fca02"} Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.815274 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:51 crc kubenswrapper[4801]: I1206 03:09:51.815927 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.404126 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.405073 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.405674 4801 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.405933 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.406168 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.529877 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.529929 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.529953 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530048 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530073 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530098 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530204 4801 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530218 4801 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.530229 4801 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.724036 4801 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.724418 4801 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.724779 4801 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.725354 4801 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" 
Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.725738 4801 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.725780 4801 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.726012 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="200ms" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.817638 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.818427 4801 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80" exitCode=0 Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.818515 4801 scope.go:117] "RemoveContainer" containerID="c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.818527 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.832446 4801 scope.go:117] "RemoveContainer" containerID="52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.837546 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.839584 4801 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.840773 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.846558 4801 scope.go:117] "RemoveContainer" containerID="e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.859501 4801 scope.go:117] "RemoveContainer" containerID="60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.875857 4801 scope.go:117] "RemoveContainer" containerID="d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80" Dec 06 03:09:52 crc 
kubenswrapper[4801]: I1206 03:09:52.889287 4801 scope.go:117] "RemoveContainer" containerID="8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.909492 4801 scope.go:117] "RemoveContainer" containerID="c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.913130 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\": container with ID starting with c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec not found: ID does not exist" containerID="c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913167 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec"} err="failed to get container status \"c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\": rpc error: code = NotFound desc = could not find container \"c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec\": container with ID starting with c4cb63119f9f3e53a90aefbaf19979e62f9d0a9aee163deae96089ac9ec94aec not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913195 4801 scope.go:117] "RemoveContainer" containerID="52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.913484 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\": container with ID starting with 52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa not found: ID does not exist" 
containerID="52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913506 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa"} err="failed to get container status \"52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\": rpc error: code = NotFound desc = could not find container \"52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa\": container with ID starting with 52e55d519cf364c79964956ea35a9379a949d948809c6d285fbd6f12fbf41baa not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913523 4801 scope.go:117] "RemoveContainer" containerID="e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.913716 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\": container with ID starting with e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44 not found: ID does not exist" containerID="e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913738 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44"} err="failed to get container status \"e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\": rpc error: code = NotFound desc = could not find container \"e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44\": container with ID starting with e4314648a1ccff2499f3913ad1a188159ec9c9f3cf886381e90e4a25ea355e44 not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913773 4801 scope.go:117] 
"RemoveContainer" containerID="60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.913944 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\": container with ID starting with 60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce not found: ID does not exist" containerID="60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913965 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce"} err="failed to get container status \"60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\": rpc error: code = NotFound desc = could not find container \"60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce\": container with ID starting with 60a4c73f41bd65d715f185a593b31a4c74ff4e627a1b56411c987b8ddb57a3ce not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.913980 4801 scope.go:117] "RemoveContainer" containerID="d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.914155 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\": container with ID starting with d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80 not found: ID does not exist" containerID="d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.914175 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80"} err="failed to get container status \"d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\": rpc error: code = NotFound desc = could not find container \"d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80\": container with ID starting with d57f29dc4e3e48b18163e9130dbf5df8fea0476d3a6ec27c83c4a8ce0c2a0b80 not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.914190 4801 scope.go:117] "RemoveContainer" containerID="8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.914353 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\": container with ID starting with 8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270 not found: ID does not exist" containerID="8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270" Dec 06 03:09:52 crc kubenswrapper[4801]: I1206 03:09:52.914373 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270"} err="failed to get container status \"8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\": rpc error: code = NotFound desc = could not find container \"8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270\": container with ID starting with 8a5fb24faa514176e84c90ba8fa9d62ec7dec7e21874271ebeab54cec62b9270 not found: ID does not exist" Dec 06 03:09:52 crc kubenswrapper[4801]: E1206 03:09:52.927322 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: 
connection refused" interval="400ms" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.051860 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.052665 4801 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.053163 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.053429 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.137865 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir\") pod \"73230722-21f5-42a5-9ffb-8856120e8ecb\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.137952 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock\") pod \"73230722-21f5-42a5-9ffb-8856120e8ecb\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.137972 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73230722-21f5-42a5-9ffb-8856120e8ecb" (UID: "73230722-21f5-42a5-9ffb-8856120e8ecb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.137986 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access\") pod \"73230722-21f5-42a5-9ffb-8856120e8ecb\" (UID: \"73230722-21f5-42a5-9ffb-8856120e8ecb\") " Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.138013 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock" (OuterVolumeSpecName: "var-lock") pod "73230722-21f5-42a5-9ffb-8856120e8ecb" (UID: "73230722-21f5-42a5-9ffb-8856120e8ecb"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.138267 4801 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.138278 4801 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73230722-21f5-42a5-9ffb-8856120e8ecb-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.143816 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73230722-21f5-42a5-9ffb-8856120e8ecb" (UID: "73230722-21f5-42a5-9ffb-8856120e8ecb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.219104 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.239379 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73230722-21f5-42a5-9ffb-8856120e8ecb-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:53 crc kubenswrapper[4801]: E1206 03:09:53.328230 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="800ms" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.829890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73230722-21f5-42a5-9ffb-8856120e8ecb","Type":"ContainerDied","Data":"92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d"} Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.829936 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92912e24a6850aa86cf63f4e309011ac3c1c6a31bcf7603ea58553c7aba3432d" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.829991 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.834210 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:53 crc kubenswrapper[4801]: I1206 03:09:53.834431 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: E1206 03:09:54.129197 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="1.6s" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.365140 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" 
containerName="oauth-openshift" containerID="cri-o://8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3" gracePeriod=15 Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.717154 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.717734 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.718201 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.718637 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.759660 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 
03:09:54.759709 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.759733 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.759769 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.759925 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760557 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760583 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760619 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760667 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 
03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760690 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760719 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qdq6\" (UniqueName: \"kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760741 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760789 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760810 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies\") pod \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\" (UID: \"5b9771c2-4f3e-4c26-ad26-fa67911f1169\") " Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.760992 4801 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.761311 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.761324 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.761364 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.761381 4801 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.761950 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.764673 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.765367 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.765462 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6" (OuterVolumeSpecName: "kube-api-access-8qdq6") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "kube-api-access-8qdq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.765829 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.766049 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.766224 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.766653 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.767914 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.768052 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5b9771c2-4f3e-4c26-ad26-fa67911f1169" (UID: "5b9771c2-4f3e-4c26-ad26-fa67911f1169"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.834778 4801 generic.go:334] "Generic (PLEG): container finished" podID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" containerID="8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3" exitCode=0 Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.834823 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" event={"ID":"5b9771c2-4f3e-4c26-ad26-fa67911f1169","Type":"ContainerDied","Data":"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3"} Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.834850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" event={"ID":"5b9771c2-4f3e-4c26-ad26-fa67911f1169","Type":"ContainerDied","Data":"4932f68c1ccbb0f462a90b3b69768102bb12b525560e94dc089a84aa99c9fefa"} Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.834888 4801 scope.go:117] "RemoveContainer" containerID="8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.835635 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.836669 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.837060 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.837314 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.850208 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.850602 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.850849 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.850901 4801 scope.go:117] "RemoveContainer" containerID="8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3" Dec 06 03:09:54 crc kubenswrapper[4801]: E1206 03:09:54.851312 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3\": container with ID starting with 8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3 not found: ID does not exist" containerID="8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.851350 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3"} err="failed to get container status \"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3\": rpc error: code = NotFound desc = could not find container \"8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3\": container with ID starting with 8e9178a367e00b2dd8f1e0998998794679dc16109f5346a37cc4acfb74b460e3 not found: ID does not exist" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862407 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862435 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862446 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862455 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862468 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qdq6\" (UniqueName: \"kubernetes.io/projected/5b9771c2-4f3e-4c26-ad26-fa67911f1169-kube-api-access-8qdq6\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862478 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862490 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 
03:09:54.862500 4801 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862508 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862518 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862527 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:54 crc kubenswrapper[4801]: I1206 03:09:54.862536 4801 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b9771c2-4f3e-4c26-ad26-fa67911f1169-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:09:55 crc kubenswrapper[4801]: E1206 03:09:55.463217 4801 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.17:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e819730ceb08b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 03:09:50.334709899 +0000 UTC m=+243.457317471,LastTimestamp:2025-12-06 03:09:50.334709899 +0000 UTC m=+243.457317471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 03:09:55 crc kubenswrapper[4801]: E1206 03:09:55.729692 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="3.2s" Dec 06 03:09:57 crc kubenswrapper[4801]: I1206 03:09:57.216307 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:57 crc kubenswrapper[4801]: I1206 03:09:57.216707 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:57 crc kubenswrapper[4801]: I1206 03:09:57.217200 4801 status_manager.go:851] "Failed to 
get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:09:58 crc kubenswrapper[4801]: E1206 03:09:58.930252 4801 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.17:6443: connect: connection refused" interval="6.4s" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.212499 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.214253 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.214947 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.215267 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.230417 4801 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.230743 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:03 crc kubenswrapper[4801]: E1206 03:10:03.231374 4801 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.231954 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:03 crc kubenswrapper[4801]: W1206 03:10:03.265804 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-18e07cf5194470461daf6255fe7b4cd2b2c9a96533aeadd65635532ef31963f0 WatchSource:0}: Error finding container 18e07cf5194470461daf6255fe7b4cd2b2c9a96533aeadd65635532ef31963f0: Status 404 returned error can't find the container with id 18e07cf5194470461daf6255fe7b4cd2b2c9a96533aeadd65635532ef31963f0 Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.888305 4801 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="0ee626781051c5924703ef8d71ba2d997384f99dc4bb268d4f202c518e791429" exitCode=0 Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.888443 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"0ee626781051c5924703ef8d71ba2d997384f99dc4bb268d4f202c518e791429"} Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.889701 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"18e07cf5194470461daf6255fe7b4cd2b2c9a96533aeadd65635532ef31963f0"} Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.890366 4801 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.890429 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.891234 4801 status_manager.go:851] "Failed to get status for pod" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: E1206 03:10:03.891232 4801 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.891990 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: 
connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.892658 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.895136 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.895236 4801 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc" exitCode=1 Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.895292 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc"} Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.896094 4801 scope.go:117] "RemoveContainer" containerID="3da0d2a9a132d551de44110fe44053d0d290022ba98c634922ad5e52984db9dc" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.896700 4801 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.897497 4801 status_manager.go:851] "Failed to get status for pod" 
podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.898874 4801 status_manager.go:851] "Failed to get status for pod" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" pod="openshift-authentication/oauth-openshift-558db77b4-cqsjn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-cqsjn\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:03 crc kubenswrapper[4801]: I1206 03:10:03.899376 4801 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.17:6443: connect: connection refused" Dec 06 03:10:04 crc kubenswrapper[4801]: I1206 03:10:04.923873 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 03:10:04 crc kubenswrapper[4801]: I1206 03:10:04.925738 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca9b225bd11be5513c695dc5d5aa37bcbd70a952ba91e4fa2a7b8073518d3efe"} Dec 06 03:10:04 crc kubenswrapper[4801]: I1206 03:10:04.930714 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"43d577c002397d90bb092b631f8220925f67f237a94bff033a2ad9da84fcb01b"} Dec 
06 03:10:04 crc kubenswrapper[4801]: I1206 03:10:04.930804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b31b7f7745072ac01e783e1f1eff48988b8174f4629d8eee993b9d7cffa07c27"} Dec 06 03:10:04 crc kubenswrapper[4801]: I1206 03:10:04.930827 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0509cdbf2a6079feb8aac66be1984a3f96b0497824fe71a2837819a011de708"} Dec 06 03:10:05 crc kubenswrapper[4801]: I1206 03:10:05.939985 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f48d921d9973d90bbb9066467cbfbc82fc2fad6746b992e14939bbf3df35d98d"} Dec 06 03:10:05 crc kubenswrapper[4801]: I1206 03:10:05.940887 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"23803154719afc0ea734408792f62d05edfecd35a289850d1b5c0ed12be3caad"} Dec 06 03:10:05 crc kubenswrapper[4801]: I1206 03:10:05.940719 4801 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:05 crc kubenswrapper[4801]: I1206 03:10:05.941026 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:05 crc kubenswrapper[4801]: I1206 03:10:05.940986 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:08 crc kubenswrapper[4801]: I1206 03:10:08.232499 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:08 crc kubenswrapper[4801]: I1206 03:10:08.232921 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:08 crc kubenswrapper[4801]: I1206 03:10:08.238490 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:10 crc kubenswrapper[4801]: I1206 03:10:10.953071 4801 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.600075 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.606215 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.967054 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.976805 4801 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.976974 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.982167 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:11 crc kubenswrapper[4801]: I1206 03:10:11.988152 4801 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="75aa4fcd-14dd-4574-9fa9-c46369615285" Dec 06 03:10:12 crc kubenswrapper[4801]: I1206 03:10:12.984941 4801 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:12 crc kubenswrapper[4801]: I1206 03:10:12.984998 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8a57dd71-1caf-4193-9a92-2fd1f871832a" Dec 06 03:10:17 crc kubenswrapper[4801]: I1206 03:10:17.241305 4801 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="75aa4fcd-14dd-4574-9fa9-c46369615285" Dec 06 03:10:20 crc kubenswrapper[4801]: I1206 03:10:20.143016 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 03:10:20 crc kubenswrapper[4801]: I1206 03:10:20.950841 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 03:10:21 crc kubenswrapper[4801]: I1206 03:10:21.226801 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 03:10:21 crc kubenswrapper[4801]: I1206 03:10:21.816358 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 03:10:21 crc kubenswrapper[4801]: I1206 03:10:21.974406 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 03:10:22 crc kubenswrapper[4801]: I1206 03:10:22.301982 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 
03:10:22 crc kubenswrapper[4801]: I1206 03:10:22.404341 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 03:10:22 crc kubenswrapper[4801]: I1206 03:10:22.658572 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 03:10:22 crc kubenswrapper[4801]: I1206 03:10:22.751792 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 03:10:22 crc kubenswrapper[4801]: I1206 03:10:22.882024 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.032927 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.245052 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.247968 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.368858 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.399994 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.410116 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.432675 
4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.505340 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.531382 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.658031 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.666801 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.678452 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.705414 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.763308 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.868136 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 03:10:23 crc kubenswrapper[4801]: I1206 03:10:23.889395 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.110555 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 
03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.197785 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.213622 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.245593 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.265257 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.392291 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.412412 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.451889 4801 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.468844 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.481391 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.499491 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.588091 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.640529 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.668285 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.692510 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.764883 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.783238 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.905367 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.916196 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.954474 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 03:10:24 crc kubenswrapper[4801]: I1206 03:10:24.971371 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.036853 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 03:10:25 crc 
kubenswrapper[4801]: I1206 03:10:25.051391 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.150993 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.170673 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.170796 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.225404 4801 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.235368 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.315559 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.361737 4801 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.482546 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.773935 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.848524 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 
03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.914824 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 03:10:25 crc kubenswrapper[4801]: I1206 03:10:25.941709 4801 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.228652 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.316926 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.614149 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.637116 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.644091 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.746370 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.851897 4801 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.861532 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.861496017 podStartE2EDuration="36.861496017s" podCreationTimestamp="2025-12-06 03:09:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:10:10.82132494 +0000 UTC m=+263.943932512" watchObservedRunningTime="2025-12-06 03:10:26.861496017 +0000 UTC m=+279.984103629" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.862678 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-cqsjn","openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.862794 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.862833 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.868296 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 03:10:26 crc kubenswrapper[4801]: I1206 03:10:26.885625 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=16.885602937 podStartE2EDuration="16.885602937s" podCreationTimestamp="2025-12-06 03:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:10:26.882840989 +0000 UTC m=+280.005448571" watchObservedRunningTime="2025-12-06 03:10:26.885602937 +0000 UTC m=+280.008210509" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.086010 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.116267 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 03:10:27 crc 
kubenswrapper[4801]: I1206 03:10:27.194708 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.224159 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" path="/var/lib/kubelet/pods/5b9771c2-4f3e-4c26-ad26-fa67911f1169/volumes" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.345105 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.358328 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.418108 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.456551 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.514572 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.532696 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.656306 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.831096 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.849067 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.884535 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.884968 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-htf5h" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="registry-server" containerID="cri-o://78ea3635cf2d146a6158feefab0fd03d04fdb84e4f81a15d8731f577300ed4fe" gracePeriod=2 Dec 06 03:10:27 crc kubenswrapper[4801]: I1206 03:10:27.992975 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.033736 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.078201 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.097785 4801 generic.go:334] "Generic (PLEG): container finished" podID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerID="78ea3635cf2d146a6158feefab0fd03d04fdb84e4f81a15d8731f577300ed4fe" exitCode=0 Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.097840 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerDied","Data":"78ea3635cf2d146a6158feefab0fd03d04fdb84e4f81a15d8731f577300ed4fe"} Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.100447 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.106872 4801 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.137593 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.140928 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.154190 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.216837 4801 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.223135 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.286876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.311794 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.411112 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.446003 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2qms\" (UniqueName: \"kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms\") pod \"83259a75-730d-4f15-8a2f-d8be13ec335a\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.446100 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content\") pod \"83259a75-730d-4f15-8a2f-d8be13ec335a\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.446153 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities\") pod \"83259a75-730d-4f15-8a2f-d8be13ec335a\" (UID: \"83259a75-730d-4f15-8a2f-d8be13ec335a\") " Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.447047 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities" (OuterVolumeSpecName: "utilities") pod "83259a75-730d-4f15-8a2f-d8be13ec335a" (UID: "83259a75-730d-4f15-8a2f-d8be13ec335a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.459854 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms" (OuterVolumeSpecName: "kube-api-access-b2qms") pod "83259a75-730d-4f15-8a2f-d8be13ec335a" (UID: "83259a75-730d-4f15-8a2f-d8be13ec335a"). InnerVolumeSpecName "kube-api-access-b2qms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.526138 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83259a75-730d-4f15-8a2f-d8be13ec335a" (UID: "83259a75-730d-4f15-8a2f-d8be13ec335a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.548157 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2qms\" (UniqueName: \"kubernetes.io/projected/83259a75-730d-4f15-8a2f-d8be13ec335a-kube-api-access-b2qms\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.548208 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.548217 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83259a75-730d-4f15-8a2f-d8be13ec335a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.607511 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 03:10:28 crc 
kubenswrapper[4801]: I1206 03:10:28.621715 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.656637 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.681340 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.696301 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.696714 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.709805 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.731053 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.747627 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 03:10:28 crc kubenswrapper[4801]: I1206 03:10:28.811363 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.023390 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.059572 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.066121 4801 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.110910 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-htf5h" event={"ID":"83259a75-730d-4f15-8a2f-d8be13ec335a","Type":"ContainerDied","Data":"1e9738e65fc5f6bfeedb01283f80efb062afdf5d9f5732b82dc8d8010ef4b12d"} Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.111059 4801 scope.go:117] "RemoveContainer" containerID="78ea3635cf2d146a6158feefab0fd03d04fdb84e4f81a15d8731f577300ed4fe" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.110964 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-htf5h" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.112197 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.144401 4801 scope.go:117] "RemoveContainer" containerID="ca8a2fd52ca0ae4feffaa5233f821c38586d6ead6a2b5b769436cb9a047148bd" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.182437 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.197299 4801 scope.go:117] "RemoveContainer" containerID="f62517087778f1e164d26675f9b13bb7e29453e186ccfc9d4384321593ad5c48" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.211049 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.228975 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-htf5h"] Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.289471 4801 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.296010 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.359177 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.455327 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.493245 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.588549 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.855734 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69b55d54f6-8lwhp"] Dec 06 03:10:29 crc kubenswrapper[4801]: E1206 03:10:29.855971 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="extract-content" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.855986 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="extract-content" Dec 06 03:10:29 crc kubenswrapper[4801]: E1206 03:10:29.855999 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" containerName="installer" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856006 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" containerName="installer" 
Dec 06 03:10:29 crc kubenswrapper[4801]: E1206 03:10:29.856015 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="registry-server" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856021 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="registry-server" Dec 06 03:10:29 crc kubenswrapper[4801]: E1206 03:10:29.856030 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="extract-utilities" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856036 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="extract-utilities" Dec 06 03:10:29 crc kubenswrapper[4801]: E1206 03:10:29.856047 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" containerName="oauth-openshift" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856054 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" containerName="oauth-openshift" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856148 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="73230722-21f5-42a5-9ffb-8856120e8ecb" containerName="installer" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856199 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9771c2-4f3e-4c26-ad26-fa67911f1169" containerName="oauth-openshift" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856209 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" containerName="registry-server" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.856682 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.863552 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864273 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864361 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864398 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864421 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864654 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.864679 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.865405 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.865489 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.865538 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 03:10:29 crc 
kubenswrapper[4801]: I1206 03:10:29.866187 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.866404 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.874567 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.877560 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.883116 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.917177 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971186 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971285 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87994278-100d-4c62-802e-6635aa1be16d-audit-dir\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " 
pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971335 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971624 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-audit-policies\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971694 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9skz\" (UniqueName: \"kubernetes.io/projected/87994278-100d-4c62-802e-6635aa1be16d-kube-api-access-n9skz\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.971942 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972022 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972064 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972142 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972283 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972426 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972523 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-error\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:29 crc kubenswrapper[4801]: I1206 03:10:29.972617 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.008981 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.021434 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.033090 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.067405 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9skz\" (UniqueName: \"kubernetes.io/projected/87994278-100d-4c62-802e-6635aa1be16d-kube-api-access-n9skz\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074533 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-audit-policies\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074577 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074616 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074655 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " 
pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074816 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074850 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-error\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074877 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074924 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074948 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/87994278-100d-4c62-802e-6635aa1be16d-audit-dir\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.074976 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.075545 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/87994278-100d-4c62-802e-6635aa1be16d-audit-dir\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.077722 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-service-ca\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.077994 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-audit-policies\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.078150 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.078384 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.082564 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-login\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.082604 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.084369 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-error\") pod 
\"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.085937 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.087152 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.088953 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.089607 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-router-certs\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.093139 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/87994278-100d-4c62-802e-6635aa1be16d-v4-0-config-system-session\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.098484 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9skz\" (UniqueName: \"kubernetes.io/projected/87994278-100d-4c62-802e-6635aa1be16d-kube-api-access-n9skz\") pod \"oauth-openshift-69b55d54f6-8lwhp\" (UID: \"87994278-100d-4c62-802e-6635aa1be16d\") " pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.125164 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.125245 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.177312 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.210161 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.214093 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.298976 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.338027 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.389515 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.410744 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.482083 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.492735 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.503650 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.506969 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.578459 4801 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.611985 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.631597 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.716130 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.737806 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.777324 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.824782 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 03:10:30 crc kubenswrapper[4801]: I1206 03:10:30.959523 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.098310 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.180819 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.210976 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 
03:10:31.218480 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83259a75-730d-4f15-8a2f-d8be13ec335a" path="/var/lib/kubelet/pods/83259a75-730d-4f15-8a2f-d8be13ec335a/volumes" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.265388 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.302167 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.356735 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.376285 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.378292 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.390116 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.392502 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.420423 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.537467 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.592074 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.669864 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.673867 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.734994 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.760067 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 06 03:10:31 crc kubenswrapper[4801]: I1206 03:10:31.834281 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.021872 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.045503 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.057800 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.068563 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.069703 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 03:10:32 crc 
kubenswrapper[4801]: I1206 03:10:32.112486 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.214412 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.232489 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.295971 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.433465 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.612531 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.618447 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.725042 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.804648 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.862907 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.917443 4801 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.955988 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 03:10:32 crc kubenswrapper[4801]: I1206 03:10:32.977941 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.122915 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.142204 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.145875 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.168317 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.211304 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.221198 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.225430 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.260900 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.286093 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.414749 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.430217 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.439465 4801 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.439908 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359" gracePeriod=5 Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.449863 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.463774 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.563771 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.751589 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.847502 4801 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.873614 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.945114 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.962349 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 03:10:33 crc kubenswrapper[4801]: I1206 03:10:33.987041 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.010067 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.115383 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.177428 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.259179 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.278609 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.374280 4801 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.375239 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.442155 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.454832 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.686994 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.801715 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.914132 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.989929 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 03:10:34 crc kubenswrapper[4801]: I1206 03:10:34.991144 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.037706 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.244849 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 03:10:35 crc 
kubenswrapper[4801]: I1206 03:10:35.403176 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.409598 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.443129 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.789165 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.897742 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.918318 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 03:10:35 crc kubenswrapper[4801]: I1206 03:10:35.971571 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.074571 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.219123 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b55d54f6-8lwhp"] Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.287031 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.305019 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-multus"/"cni-copy-resources" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.430176 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.436398 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.502388 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.509307 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.516415 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69b55d54f6-8lwhp"] Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.927967 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 03:10:36 crc kubenswrapper[4801]: I1206 03:10:36.932273 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.122414 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.149232 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.170140 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" 
event={"ID":"87994278-100d-4c62-802e-6635aa1be16d","Type":"ContainerStarted","Data":"f29f5973fefad83da435a1bb9cb558aaec7783d08015fc51f5591adf816453a7"} Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.170227 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" event={"ID":"87994278-100d-4c62-802e-6635aa1be16d","Type":"ContainerStarted","Data":"2978134535fb9ca29bd1b8b1c470ff9a7852fe36a79f4e0903600d861e94cec2"} Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.170509 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.207128 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" podStartSLOduration=68.20709599 podStartE2EDuration="1m8.20709599s" podCreationTimestamp="2025-12-06 03:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:10:37.204198788 +0000 UTC m=+290.326806400" watchObservedRunningTime="2025-12-06 03:10:37.20709599 +0000 UTC m=+290.329703602" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.270907 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.273928 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.391336 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.396009 4801 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 03:10:37 crc kubenswrapper[4801]: I1206 03:10:37.652848 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" Dec 06 03:10:38 crc kubenswrapper[4801]: I1206 03:10:38.065476 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 03:10:38 crc kubenswrapper[4801]: I1206 03:10:38.557269 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 03:10:38 crc kubenswrapper[4801]: I1206 03:10:38.587107 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.046494 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.047086 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.119665 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.159615 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.200455 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.200975 4801 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359" exitCode=137 Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.201124 4801 scope.go:117] "RemoveContainer" containerID="3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.201147 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.221395 4801 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.221964 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222053 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222107 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222232 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222275 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 03:10:39 crc kubenswrapper[4801]: 
I1206 03:10:39.222223 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222277 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222340 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222646 4801 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222679 4801 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222700 4801 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.222715 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.236333 4801 scope.go:117] "RemoveContainer" containerID="3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359" Dec 06 03:10:39 crc kubenswrapper[4801]: E1206 03:10:39.237381 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359\": container with ID starting with 3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359 not found: ID does not exist" containerID="3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.237500 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359"} err="failed to get container status \"3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359\": rpc error: code = NotFound desc = could not find container \"3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359\": container with ID starting with 3fb6d1f26535a355060fdc581eb9acce9045a0251f9ddb3a9186139f1f969359 not found: ID does not exist" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.240676 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.291277 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.291582 4801 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="211b2f3b-e644-4082-8d38-0eefca94ecc2" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.291727 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.291822 4801 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="211b2f3b-e644-4082-8d38-0eefca94ecc2" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.325894 4801 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:39 crc kubenswrapper[4801]: I1206 03:10:39.325941 4801 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:41 crc kubenswrapper[4801]: I1206 03:10:41.244821 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 06 03:10:52 crc kubenswrapper[4801]: I1206 03:10:52.310981 4801 generic.go:334] "Generic (PLEG): container finished" podID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerID="829e91a399b6bd3f5e3303b7b6418aac772ec01a8ff729a47e368455b9acc789" exitCode=0 Dec 06 03:10:52 crc kubenswrapper[4801]: 
I1206 03:10:52.311147 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerDied","Data":"829e91a399b6bd3f5e3303b7b6418aac772ec01a8ff729a47e368455b9acc789"} Dec 06 03:10:52 crc kubenswrapper[4801]: I1206 03:10:52.312338 4801 scope.go:117] "RemoveContainer" containerID="829e91a399b6bd3f5e3303b7b6418aac772ec01a8ff729a47e368455b9acc789" Dec 06 03:10:53 crc kubenswrapper[4801]: I1206 03:10:53.325161 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerStarted","Data":"c80efe2323a55a72e0823f4eef02eb0feedbdbdfa89ceb18523384fb749451e8"} Dec 06 03:10:53 crc kubenswrapper[4801]: I1206 03:10:53.327097 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:10:53 crc kubenswrapper[4801]: I1206 03:10:53.329934 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:10:54 crc kubenswrapper[4801]: I1206 03:10:54.934839 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:10:54 crc kubenswrapper[4801]: I1206 03:10:54.935703 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" podUID="349c2ebc-4077-42b4-b295-41d0a3a18e74" containerName="controller-manager" containerID="cri-o://63e39f06ae2cac27b0b40ebb5dfa2a9685c8892bc61a4074362b3d858bd78682" gracePeriod=30 Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.026009 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 
03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.026577 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" podUID="48133237-eb56-4344-8fb4-8e61ce32bf37" containerName="route-controller-manager" containerID="cri-o://469b81a57efcfe247725587e3c3c3b9fb52d9dcec12a1f0288f04e73e12f8ffd" gracePeriod=30 Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.337503 4801 generic.go:334] "Generic (PLEG): container finished" podID="349c2ebc-4077-42b4-b295-41d0a3a18e74" containerID="63e39f06ae2cac27b0b40ebb5dfa2a9685c8892bc61a4074362b3d858bd78682" exitCode=0 Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.337808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" event={"ID":"349c2ebc-4077-42b4-b295-41d0a3a18e74","Type":"ContainerDied","Data":"63e39f06ae2cac27b0b40ebb5dfa2a9685c8892bc61a4074362b3d858bd78682"} Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.339144 4801 generic.go:334] "Generic (PLEG): container finished" podID="48133237-eb56-4344-8fb4-8e61ce32bf37" containerID="469b81a57efcfe247725587e3c3c3b9fb52d9dcec12a1f0288f04e73e12f8ffd" exitCode=0 Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.339835 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" event={"ID":"48133237-eb56-4344-8fb4-8e61ce32bf37","Type":"ContainerDied","Data":"469b81a57efcfe247725587e3c3c3b9fb52d9dcec12a1f0288f04e73e12f8ffd"} Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.494288 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.498401 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677076 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert\") pod \"48133237-eb56-4344-8fb4-8e61ce32bf37\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677141 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config\") pod \"48133237-eb56-4344-8fb4-8e61ce32bf37\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677192 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config\") pod \"349c2ebc-4077-42b4-b295-41d0a3a18e74\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca\") pod \"349c2ebc-4077-42b4-b295-41d0a3a18e74\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677260 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnrgm\" (UniqueName: \"kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm\") pod \"48133237-eb56-4344-8fb4-8e61ce32bf37\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677306 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles\") pod \"349c2ebc-4077-42b4-b295-41d0a3a18e74\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677356 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca\") pod \"48133237-eb56-4344-8fb4-8e61ce32bf37\" (UID: \"48133237-eb56-4344-8fb4-8e61ce32bf37\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677401 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert\") pod \"349c2ebc-4077-42b4-b295-41d0a3a18e74\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.677425 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhq9l\" (UniqueName: \"kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l\") pod \"349c2ebc-4077-42b4-b295-41d0a3a18e74\" (UID: \"349c2ebc-4077-42b4-b295-41d0a3a18e74\") " Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.678352 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca" (OuterVolumeSpecName: "client-ca") pod "48133237-eb56-4344-8fb4-8e61ce32bf37" (UID: "48133237-eb56-4344-8fb4-8e61ce32bf37"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.678421 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config" (OuterVolumeSpecName: "config") pod "48133237-eb56-4344-8fb4-8e61ce32bf37" (UID: "48133237-eb56-4344-8fb4-8e61ce32bf37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.678466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca" (OuterVolumeSpecName: "client-ca") pod "349c2ebc-4077-42b4-b295-41d0a3a18e74" (UID: "349c2ebc-4077-42b4-b295-41d0a3a18e74"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.678478 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "349c2ebc-4077-42b4-b295-41d0a3a18e74" (UID: "349c2ebc-4077-42b4-b295-41d0a3a18e74"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.679004 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config" (OuterVolumeSpecName: "config") pod "349c2ebc-4077-42b4-b295-41d0a3a18e74" (UID: "349c2ebc-4077-42b4-b295-41d0a3a18e74"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.684488 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm" (OuterVolumeSpecName: "kube-api-access-lnrgm") pod "48133237-eb56-4344-8fb4-8e61ce32bf37" (UID: "48133237-eb56-4344-8fb4-8e61ce32bf37"). InnerVolumeSpecName "kube-api-access-lnrgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.684522 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "48133237-eb56-4344-8fb4-8e61ce32bf37" (UID: "48133237-eb56-4344-8fb4-8e61ce32bf37"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.684650 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "349c2ebc-4077-42b4-b295-41d0a3a18e74" (UID: "349c2ebc-4077-42b4-b295-41d0a3a18e74"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.700868 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l" (OuterVolumeSpecName: "kube-api-access-bhq9l") pod "349c2ebc-4077-42b4-b295-41d0a3a18e74" (UID: "349c2ebc-4077-42b4-b295-41d0a3a18e74"). InnerVolumeSpecName "kube-api-access-bhq9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778814 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48133237-eb56-4344-8fb4-8e61ce32bf37-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778850 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778861 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778875 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778886 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnrgm\" (UniqueName: \"kubernetes.io/projected/48133237-eb56-4344-8fb4-8e61ce32bf37-kube-api-access-lnrgm\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778898 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/349c2ebc-4077-42b4-b295-41d0a3a18e74-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778907 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48133237-eb56-4344-8fb4-8e61ce32bf37-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778916 4801 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/349c2ebc-4077-42b4-b295-41d0a3a18e74-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:55 crc kubenswrapper[4801]: I1206 03:10:55.778927 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhq9l\" (UniqueName: \"kubernetes.io/projected/349c2ebc-4077-42b4-b295-41d0a3a18e74-kube-api-access-bhq9l\") on node \"crc\" DevicePath \"\"" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.348733 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" event={"ID":"48133237-eb56-4344-8fb4-8e61ce32bf37","Type":"ContainerDied","Data":"a28a9e456318ca5060a1ab0a5f421fffb30ab9c25ef296b1ae69a01c5e3b33eb"} Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.348831 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.349520 4801 scope.go:117] "RemoveContainer" containerID="469b81a57efcfe247725587e3c3c3b9fb52d9dcec12a1f0288f04e73e12f8ffd" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.351360 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" event={"ID":"349c2ebc-4077-42b4-b295-41d0a3a18e74","Type":"ContainerDied","Data":"fcb4455830739c34f9e4c260a882f345846e0a23769dc4aade2fe16baecf1902"} Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.351497 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zscxm" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.377627 4801 scope.go:117] "RemoveContainer" containerID="63e39f06ae2cac27b0b40ebb5dfa2a9685c8892bc61a4074362b3d858bd78682" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.391788 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.399509 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5m6h"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.411421 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.416991 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zscxm"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836170 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:10:56 crc kubenswrapper[4801]: E1206 03:10:56.836585 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48133237-eb56-4344-8fb4-8e61ce32bf37" containerName="route-controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836618 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="48133237-eb56-4344-8fb4-8e61ce32bf37" containerName="route-controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: E1206 03:10:56.836659 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836672 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 03:10:56 crc kubenswrapper[4801]: E1206 03:10:56.836696 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349c2ebc-4077-42b4-b295-41d0a3a18e74" containerName="controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836709 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="349c2ebc-4077-42b4-b295-41d0a3a18e74" containerName="controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836917 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="48133237-eb56-4344-8fb4-8e61ce32bf37" containerName="route-controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.836962 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="349c2ebc-4077-42b4-b295-41d0a3a18e74" containerName="controller-manager" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.837315 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.838008 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.852801 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.853380 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.857675 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.858103 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.858254 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.858364 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.864411 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.879734 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.881601 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.881965 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.885488 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.885668 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.886234 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.886453 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.886644 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.886948 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.897253 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900316 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900358 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900394 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900429 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586h4\" (UniqueName: \"kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900472 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900494 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: 
\"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900517 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:56 crc kubenswrapper[4801]: I1206 03:10:56.900570 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8hfj\" (UniqueName: \"kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001435 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001502 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001536 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586h4\" (UniqueName: \"kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001608 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001626 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " 
pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001648 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.001714 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8hfj\" (UniqueName: \"kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.003381 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.003501 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.004677 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.005159 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.005826 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.011889 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.019896 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.029684 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586h4\" (UniqueName: \"kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4\") pod \"controller-manager-b66b4cdf7-r66lx\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.039162 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8hfj\" (UniqueName: \"kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj\") pod \"route-controller-manager-d9b47fcd5-fq62r\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.212564 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.221011 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349c2ebc-4077-42b4-b295-41d0a3a18e74" path="/var/lib/kubelet/pods/349c2ebc-4077-42b4-b295-41d0a3a18e74/volumes" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.221540 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48133237-eb56-4344-8fb4-8e61ce32bf37" path="/var/lib/kubelet/pods/48133237-eb56-4344-8fb4-8e61ce32bf37/volumes" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.222072 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.585746 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:10:57 crc kubenswrapper[4801]: W1206 03:10:57.589996 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod500b37c0_d751_4d6e_aec0_9334b1c7169f.slice/crio-dce9b681b9572a3b88bc37f5595c88d7e0c830e64ca8382c471226bed9dfce1d WatchSource:0}: Error finding container dce9b681b9572a3b88bc37f5595c88d7e0c830e64ca8382c471226bed9dfce1d: Status 404 returned error can't find the container with id dce9b681b9572a3b88bc37f5595c88d7e0c830e64ca8382c471226bed9dfce1d Dec 06 03:10:57 crc kubenswrapper[4801]: I1206 03:10:57.711890 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.373559 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" 
event={"ID":"13a93ea2-1b79-4141-aacc-336e9485eb79","Type":"ContainerStarted","Data":"40f3c2dc4558be5f970582be886fb59da9b9572413102db7cb0ee26e5380c71a"} Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.373991 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" event={"ID":"13a93ea2-1b79-4141-aacc-336e9485eb79","Type":"ContainerStarted","Data":"5e584130ffcdba5f367eb1283b812322f6424033771ed862e0271689f15fcc9d"} Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.374481 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.376685 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" event={"ID":"500b37c0-d751-4d6e-aec0-9334b1c7169f","Type":"ContainerStarted","Data":"2dbfa80bb8ea78046b406575b621ae0a270d4cada2fdfcd8d201c03e73e432ce"} Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.376727 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" event={"ID":"500b37c0-d751-4d6e-aec0-9334b1c7169f","Type":"ContainerStarted","Data":"dce9b681b9572a3b88bc37f5595c88d7e0c830e64ca8382c471226bed9dfce1d"} Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.377093 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.379219 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.399223 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" podStartSLOduration=3.399206533 podStartE2EDuration="3.399206533s" podCreationTimestamp="2025-12-06 03:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:10:58.396391494 +0000 UTC m=+311.518999066" watchObservedRunningTime="2025-12-06 03:10:58.399206533 +0000 UTC m=+311.521814105" Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.418688 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" podStartSLOduration=3.418670283 podStartE2EDuration="3.418670283s" podCreationTimestamp="2025-12-06 03:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:10:58.41573018 +0000 UTC m=+311.538337752" watchObservedRunningTime="2025-12-06 03:10:58.418670283 +0000 UTC m=+311.541277855" Dec 06 03:10:58 crc kubenswrapper[4801]: I1206 03:10:58.646721 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:11:14 crc kubenswrapper[4801]: I1206 03:11:14.965620 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:11:14 crc kubenswrapper[4801]: I1206 03:11:14.967080 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" podUID="13a93ea2-1b79-4141-aacc-336e9485eb79" containerName="controller-manager" containerID="cri-o://40f3c2dc4558be5f970582be886fb59da9b9572413102db7cb0ee26e5380c71a" gracePeriod=30 Dec 06 03:11:14 crc kubenswrapper[4801]: I1206 03:11:14.997516 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:11:14 crc kubenswrapper[4801]: I1206 03:11:14.998654 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" podUID="500b37c0-d751-4d6e-aec0-9334b1c7169f" containerName="route-controller-manager" containerID="cri-o://2dbfa80bb8ea78046b406575b621ae0a270d4cada2fdfcd8d201c03e73e432ce" gracePeriod=30 Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.495627 4801 generic.go:334] "Generic (PLEG): container finished" podID="500b37c0-d751-4d6e-aec0-9334b1c7169f" containerID="2dbfa80bb8ea78046b406575b621ae0a270d4cada2fdfcd8d201c03e73e432ce" exitCode=0 Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.495736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" event={"ID":"500b37c0-d751-4d6e-aec0-9334b1c7169f","Type":"ContainerDied","Data":"2dbfa80bb8ea78046b406575b621ae0a270d4cada2fdfcd8d201c03e73e432ce"} Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.497518 4801 generic.go:334] "Generic (PLEG): container finished" podID="13a93ea2-1b79-4141-aacc-336e9485eb79" containerID="40f3c2dc4558be5f970582be886fb59da9b9572413102db7cb0ee26e5380c71a" exitCode=0 Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.497551 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" event={"ID":"13a93ea2-1b79-4141-aacc-336e9485eb79","Type":"ContainerDied","Data":"40f3c2dc4558be5f970582be886fb59da9b9572413102db7cb0ee26e5380c71a"} Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.563254 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.603160 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config\") pod \"500b37c0-d751-4d6e-aec0-9334b1c7169f\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.603251 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8hfj\" (UniqueName: \"kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj\") pod \"500b37c0-d751-4d6e-aec0-9334b1c7169f\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.603435 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca\") pod \"500b37c0-d751-4d6e-aec0-9334b1c7169f\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.603481 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert\") pod \"500b37c0-d751-4d6e-aec0-9334b1c7169f\" (UID: \"500b37c0-d751-4d6e-aec0-9334b1c7169f\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.604697 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca" (OuterVolumeSpecName: "client-ca") pod "500b37c0-d751-4d6e-aec0-9334b1c7169f" (UID: "500b37c0-d751-4d6e-aec0-9334b1c7169f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.604878 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config" (OuterVolumeSpecName: "config") pod "500b37c0-d751-4d6e-aec0-9334b1c7169f" (UID: "500b37c0-d751-4d6e-aec0-9334b1c7169f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.610018 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "500b37c0-d751-4d6e-aec0-9334b1c7169f" (UID: "500b37c0-d751-4d6e-aec0-9334b1c7169f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.611298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj" (OuterVolumeSpecName: "kube-api-access-x8hfj") pod "500b37c0-d751-4d6e-aec0-9334b1c7169f" (UID: "500b37c0-d751-4d6e-aec0-9334b1c7169f"). InnerVolumeSpecName "kube-api-access-x8hfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.635266 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.705565 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca\") pod \"13a93ea2-1b79-4141-aacc-336e9485eb79\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.705920 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586h4\" (UniqueName: \"kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4\") pod \"13a93ea2-1b79-4141-aacc-336e9485eb79\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.706145 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert\") pod \"13a93ea2-1b79-4141-aacc-336e9485eb79\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.706219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles\") pod \"13a93ea2-1b79-4141-aacc-336e9485eb79\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.706321 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config\") pod \"13a93ea2-1b79-4141-aacc-336e9485eb79\" (UID: \"13a93ea2-1b79-4141-aacc-336e9485eb79\") " Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.706972 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707206 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/500b37c0-d751-4d6e-aec0-9334b1c7169f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707236 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/500b37c0-d751-4d6e-aec0-9334b1c7169f-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707280 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8hfj\" (UniqueName: \"kubernetes.io/projected/500b37c0-d751-4d6e-aec0-9334b1c7169f-kube-api-access-x8hfj\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707330 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca" (OuterVolumeSpecName: "client-ca") pod "13a93ea2-1b79-4141-aacc-336e9485eb79" (UID: "13a93ea2-1b79-4141-aacc-336e9485eb79"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707564 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "13a93ea2-1b79-4141-aacc-336e9485eb79" (UID: "13a93ea2-1b79-4141-aacc-336e9485eb79"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.707592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config" (OuterVolumeSpecName: "config") pod "13a93ea2-1b79-4141-aacc-336e9485eb79" (UID: "13a93ea2-1b79-4141-aacc-336e9485eb79"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.711023 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4" (OuterVolumeSpecName: "kube-api-access-586h4") pod "13a93ea2-1b79-4141-aacc-336e9485eb79" (UID: "13a93ea2-1b79-4141-aacc-336e9485eb79"). InnerVolumeSpecName "kube-api-access-586h4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.711148 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "13a93ea2-1b79-4141-aacc-336e9485eb79" (UID: "13a93ea2-1b79-4141-aacc-336e9485eb79"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.809059 4801 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.809120 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586h4\" (UniqueName: \"kubernetes.io/projected/13a93ea2-1b79-4141-aacc-336e9485eb79-kube-api-access-586h4\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.809140 4801 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a93ea2-1b79-4141-aacc-336e9485eb79-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.809153 4801 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:15 crc kubenswrapper[4801]: I1206 03:11:15.809167 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a93ea2-1b79-4141-aacc-336e9485eb79-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.098009 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz"] Dec 06 03:11:16 crc kubenswrapper[4801]: E1206 03:11:16.098625 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a93ea2-1b79-4141-aacc-336e9485eb79" containerName="controller-manager" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.098660 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a93ea2-1b79-4141-aacc-336e9485eb79" containerName="controller-manager" Dec 06 03:11:16 crc 
kubenswrapper[4801]: E1206 03:11:16.098698 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500b37c0-d751-4d6e-aec0-9334b1c7169f" containerName="route-controller-manager" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.098719 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="500b37c0-d751-4d6e-aec0-9334b1c7169f" containerName="route-controller-manager" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.099052 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a93ea2-1b79-4141-aacc-336e9485eb79" containerName="controller-manager" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.099087 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="500b37c0-d751-4d6e-aec0-9334b1c7169f" containerName="route-controller-manager" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.100089 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.106147 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-58ddb96588-w4bb8"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.107258 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.122253 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58ddb96588-w4bb8"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.129823 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.214993 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zb5k\" (UniqueName: \"kubernetes.io/projected/7834c771-55d0-4da3-9d45-a48e01403463-kube-api-access-8zb5k\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215069 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-client-ca\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215107 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30de1f7b-1348-4f12-9952-f639cd0f3e2f-serving-cert\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215126 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-config\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215152 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-proxy-ca-bundles\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7834c771-55d0-4da3-9d45-a48e01403463-serving-cert\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215210 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-client-ca\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215251 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-config\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " 
pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.215274 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6njw\" (UniqueName: \"kubernetes.io/projected/30de1f7b-1348-4f12-9952-f639cd0f3e2f-kube-api-access-g6njw\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.318905 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-config\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.318984 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6njw\" (UniqueName: \"kubernetes.io/projected/30de1f7b-1348-4f12-9952-f639cd0f3e2f-kube-api-access-g6njw\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zb5k\" (UniqueName: \"kubernetes.io/projected/7834c771-55d0-4da3-9d45-a48e01403463-kube-api-access-8zb5k\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319189 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-client-ca\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319278 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30de1f7b-1348-4f12-9952-f639cd0f3e2f-serving-cert\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319313 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-config\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319383 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-proxy-ca-bundles\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.319430 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7834c771-55d0-4da3-9d45-a48e01403463-serving-cert\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: 
I1206 03:11:16.319514 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-client-ca\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.321673 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-config\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.321951 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-config\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.322992 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7834c771-55d0-4da3-9d45-a48e01403463-client-ca\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.325273 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-client-ca\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " 
pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.327403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/30de1f7b-1348-4f12-9952-f639cd0f3e2f-proxy-ca-bundles\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.330397 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7834c771-55d0-4da3-9d45-a48e01403463-serving-cert\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.330408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30de1f7b-1348-4f12-9952-f639cd0f3e2f-serving-cert\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.344216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zb5k\" (UniqueName: \"kubernetes.io/projected/7834c771-55d0-4da3-9d45-a48e01403463-kube-api-access-8zb5k\") pod \"route-controller-manager-64f8858df9-hbqzz\" (UID: \"7834c771-55d0-4da3-9d45-a48e01403463\") " pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.350251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6njw\" (UniqueName: 
\"kubernetes.io/projected/30de1f7b-1348-4f12-9952-f639cd0f3e2f-kube-api-access-g6njw\") pod \"controller-manager-58ddb96588-w4bb8\" (UID: \"30de1f7b-1348-4f12-9952-f639cd0f3e2f\") " pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.436680 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.458032 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.513798 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" event={"ID":"500b37c0-d751-4d6e-aec0-9334b1c7169f","Type":"ContainerDied","Data":"dce9b681b9572a3b88bc37f5595c88d7e0c830e64ca8382c471226bed9dfce1d"} Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.513860 4801 scope.go:117] "RemoveContainer" containerID="2dbfa80bb8ea78046b406575b621ae0a270d4cada2fdfcd8d201c03e73e432ce" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.514017 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.523027 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" event={"ID":"13a93ea2-1b79-4141-aacc-336e9485eb79","Type":"ContainerDied","Data":"5e584130ffcdba5f367eb1283b812322f6424033771ed862e0271689f15fcc9d"} Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.523390 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b66b4cdf7-r66lx" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.549242 4801 scope.go:117] "RemoveContainer" containerID="40f3c2dc4558be5f970582be886fb59da9b9572413102db7cb0ee26e5380c71a" Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.574910 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.579668 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d9b47fcd5-fq62r"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.589383 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.595413 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b66b4cdf7-r66lx"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.777534 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-58ddb96588-w4bb8"] Dec 06 03:11:16 crc kubenswrapper[4801]: I1206 03:11:16.934558 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz"] Dec 06 03:11:16 crc kubenswrapper[4801]: W1206 03:11:16.941849 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7834c771_55d0_4da3_9d45_a48e01403463.slice/crio-3cd30fee276b72e98529de3bdac1a5da0da88ecd7085bd50a3a558aae0ccb6bf WatchSource:0}: Error finding container 3cd30fee276b72e98529de3bdac1a5da0da88ecd7085bd50a3a558aae0ccb6bf: Status 404 returned error can't find the container with id 3cd30fee276b72e98529de3bdac1a5da0da88ecd7085bd50a3a558aae0ccb6bf Dec 06 
03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.219235 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a93ea2-1b79-4141-aacc-336e9485eb79" path="/var/lib/kubelet/pods/13a93ea2-1b79-4141-aacc-336e9485eb79/volumes" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.220356 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500b37c0-d751-4d6e-aec0-9334b1c7169f" path="/var/lib/kubelet/pods/500b37c0-d751-4d6e-aec0-9334b1c7169f/volumes" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.534608 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" event={"ID":"7834c771-55d0-4da3-9d45-a48e01403463","Type":"ContainerStarted","Data":"57deba389e6b358146e216f88e2f6fc0403b1b1a02c2ac5cc918958ed143f130"} Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.534713 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.534744 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" event={"ID":"7834c771-55d0-4da3-9d45-a48e01403463","Type":"ContainerStarted","Data":"3cd30fee276b72e98529de3bdac1a5da0da88ecd7085bd50a3a558aae0ccb6bf"} Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.538462 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" event={"ID":"30de1f7b-1348-4f12-9952-f639cd0f3e2f","Type":"ContainerStarted","Data":"13d99ed7d6cb7f5bdc5b50f370efd052abebf402aa31fd4542a1917e86b0e780"} Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.538517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" 
event={"ID":"30de1f7b-1348-4f12-9952-f639cd0f3e2f","Type":"ContainerStarted","Data":"4ecf7e7eb45340b911c2d95c6ea6112f5917c7038e122ed76730aad0c8a6eb22"} Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.538612 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.543421 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.552090 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.564694 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" podStartSLOduration=2.564668821 podStartE2EDuration="2.564668821s" podCreationTimestamp="2025-12-06 03:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:11:17.561622773 +0000 UTC m=+330.684230355" watchObservedRunningTime="2025-12-06 03:11:17.564668821 +0000 UTC m=+330.687276393" Dec 06 03:11:17 crc kubenswrapper[4801]: I1206 03:11:17.588644 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" podStartSLOduration=3.588614553 podStartE2EDuration="3.588614553s" podCreationTimestamp="2025-12-06 03:11:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:11:17.586497812 +0000 UTC m=+330.709105394" watchObservedRunningTime="2025-12-06 03:11:17.588614553 +0000 UTC m=+330.711222135" Dec 06 03:11:41 
crc kubenswrapper[4801]: I1206 03:11:41.170185 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:11:41 crc kubenswrapper[4801]: I1206 03:11:41.170942 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.496188 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tn2s5"] Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.498197 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.520316 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tn2s5"] Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.583856 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-bound-sa-token\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.583932 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-certificates\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.583975 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rkxf\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-kube-api-access-6rkxf\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.584003 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d2c5b141-4472-4e19-9984-fa8facdd6b84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.584081 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-trusted-ca\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.584183 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d2c5b141-4472-4e19-9984-fa8facdd6b84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.584259 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.584323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-tls\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.606484 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687280 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rkxf\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-kube-api-access-6rkxf\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687356 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d2c5b141-4472-4e19-9984-fa8facdd6b84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687402 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-trusted-ca\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687430 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d2c5b141-4472-4e19-9984-fa8facdd6b84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687464 
4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-tls\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-bound-sa-token\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.687514 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-certificates\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.688314 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d2c5b141-4472-4e19-9984-fa8facdd6b84-ca-trust-extracted\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.689136 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-certificates\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.689333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2c5b141-4472-4e19-9984-fa8facdd6b84-trusted-ca\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.700088 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d2c5b141-4472-4e19-9984-fa8facdd6b84-installation-pull-secrets\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.700569 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-registry-tls\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.703000 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-bound-sa-token\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: \"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.708692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rkxf\" (UniqueName: \"kubernetes.io/projected/d2c5b141-4472-4e19-9984-fa8facdd6b84-kube-api-access-6rkxf\") pod \"image-registry-66df7c8f76-tn2s5\" (UID: 
\"d2c5b141-4472-4e19-9984-fa8facdd6b84\") " pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:49 crc kubenswrapper[4801]: I1206 03:11:49.817952 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:50 crc kubenswrapper[4801]: I1206 03:11:50.308980 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-tn2s5"] Dec 06 03:11:50 crc kubenswrapper[4801]: W1206 03:11:50.320071 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2c5b141_4472_4e19_9984_fa8facdd6b84.slice/crio-385b8a756d89dcd1453da9a776ed5ec5cf7a380e4cc91e348108f260b05f5479 WatchSource:0}: Error finding container 385b8a756d89dcd1453da9a776ed5ec5cf7a380e4cc91e348108f260b05f5479: Status 404 returned error can't find the container with id 385b8a756d89dcd1453da9a776ed5ec5cf7a380e4cc91e348108f260b05f5479 Dec 06 03:11:50 crc kubenswrapper[4801]: I1206 03:11:50.765059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" event={"ID":"d2c5b141-4472-4e19-9984-fa8facdd6b84","Type":"ContainerStarted","Data":"5e4a5bf1569985641056a75a32d3541be11167755bd3ef9e6d1ab5e141d59f22"} Dec 06 03:11:50 crc kubenswrapper[4801]: I1206 03:11:50.765134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" event={"ID":"d2c5b141-4472-4e19-9984-fa8facdd6b84","Type":"ContainerStarted","Data":"385b8a756d89dcd1453da9a776ed5ec5cf7a380e4cc91e348108f260b05f5479"} Dec 06 03:11:50 crc kubenswrapper[4801]: I1206 03:11:50.766072 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:11:50 crc kubenswrapper[4801]: I1206 03:11:50.795390 4801 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" podStartSLOduration=1.7953712899999998 podStartE2EDuration="1.79537129s" podCreationTimestamp="2025-12-06 03:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:11:50.7936177 +0000 UTC m=+363.916225312" watchObservedRunningTime="2025-12-06 03:11:50.79537129 +0000 UTC m=+363.917978862" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.000771 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.001975 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fn52d" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="registry-server" containerID="cri-o://68ea112c77ed408c1b68b0c04ad1b04fb1233f53a3ff189033ea773d2f4f943e" gracePeriod=30 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.005658 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.006018 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l77v9" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="registry-server" containerID="cri-o://00046d95d661d68549ce20c410f2a2e754726746ffe22ae466de5b102815bde5" gracePeriod=30 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.020729 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.023200 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" 
podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" containerID="cri-o://c80efe2323a55a72e0823f4eef02eb0feedbdbdfa89ceb18523384fb749451e8" gracePeriod=30 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.035478 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.035736 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ffnmp" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="registry-server" containerID="cri-o://4356a6dc5053a7dcd2ecfcb26ed1c4a68856b99aeb6af91589873e1e78947608" gracePeriod=30 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.048064 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8t2r5"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.048336 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8t2r5" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="registry-server" containerID="cri-o://94a94c67049b7d911caa380a3a59c9252c9b09118108fbff4855adb87f9de09e" gracePeriod=30 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.062460 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpzsr"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.072684 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpzsr"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.072825 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.214744 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qfj9\" (UniqueName: \"kubernetes.io/projected/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-kube-api-access-6qfj9\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.214837 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.214860 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.316781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qfj9\" (UniqueName: \"kubernetes.io/projected/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-kube-api-access-6qfj9\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.316854 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.316886 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.318581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.324685 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.334076 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qfj9\" (UniqueName: \"kubernetes.io/projected/f81f28d8-ea1d-4089-bb9c-cbe684ad3044-kube-api-access-6qfj9\") pod \"marketplace-operator-79b997595-rpzsr\" (UID: \"f81f28d8-ea1d-4089-bb9c-cbe684ad3044\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.391647 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.812367 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rpzsr"] Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.817867 4801 generic.go:334] "Generic (PLEG): container finished" podID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerID="c80efe2323a55a72e0823f4eef02eb0feedbdbdfa89ceb18523384fb749451e8" exitCode=0 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.817956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerDied","Data":"c80efe2323a55a72e0823f4eef02eb0feedbdbdfa89ceb18523384fb749451e8"} Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.818003 4801 scope.go:117] "RemoveContainer" containerID="829e91a399b6bd3f5e3303b7b6418aac772ec01a8ff729a47e368455b9acc789" Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.824237 4801 generic.go:334] "Generic (PLEG): container finished" podID="98beccef-be81-4934-b000-a41b741ed810" containerID="00046d95d661d68549ce20c410f2a2e754726746ffe22ae466de5b102815bde5" exitCode=0 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.824319 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerDied","Data":"00046d95d661d68549ce20c410f2a2e754726746ffe22ae466de5b102815bde5"} Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.826000 4801 generic.go:334] "Generic (PLEG): container finished" podID="73474c40-4e21-4384-94be-94d4015e7668" 
containerID="94a94c67049b7d911caa380a3a59c9252c9b09118108fbff4855adb87f9de09e" exitCode=0 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.826059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerDied","Data":"94a94c67049b7d911caa380a3a59c9252c9b09118108fbff4855adb87f9de09e"} Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.828947 4801 generic.go:334] "Generic (PLEG): container finished" podID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerID="68ea112c77ed408c1b68b0c04ad1b04fb1233f53a3ff189033ea773d2f4f943e" exitCode=0 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.828998 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerDied","Data":"68ea112c77ed408c1b68b0c04ad1b04fb1233f53a3ff189033ea773d2f4f943e"} Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.831980 4801 generic.go:334] "Generic (PLEG): container finished" podID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerID="4356a6dc5053a7dcd2ecfcb26ed1c4a68856b99aeb6af91589873e1e78947608" exitCode=0 Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.832007 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerDied","Data":"4356a6dc5053a7dcd2ecfcb26ed1c4a68856b99aeb6af91589873e1e78947608"} Dec 06 03:11:57 crc kubenswrapper[4801]: I1206 03:11:57.957736 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.029031 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities\") pod \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.029132 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content\") pod \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.029223 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xsgn\" (UniqueName: \"kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn\") pod \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\" (UID: \"b9bf536e-ce23-42dc-bbaa-69626ccf959f\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.030476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities" (OuterVolumeSpecName: "utilities") pod "b9bf536e-ce23-42dc-bbaa-69626ccf959f" (UID: "b9bf536e-ce23-42dc-bbaa-69626ccf959f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.047089 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn" (OuterVolumeSpecName: "kube-api-access-9xsgn") pod "b9bf536e-ce23-42dc-bbaa-69626ccf959f" (UID: "b9bf536e-ce23-42dc-bbaa-69626ccf959f"). InnerVolumeSpecName "kube-api-access-9xsgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.080121 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9bf536e-ce23-42dc-bbaa-69626ccf959f" (UID: "b9bf536e-ce23-42dc-bbaa-69626ccf959f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.085611 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.094489 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.131487 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.131520 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9bf536e-ce23-42dc-bbaa-69626ccf959f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.131531 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xsgn\" (UniqueName: \"kubernetes.io/projected/b9bf536e-ce23-42dc-bbaa-69626ccf959f-kube-api-access-9xsgn\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.179702 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.183713 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232593 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2rp2\" (UniqueName: \"kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2\") pod \"73474c40-4e21-4384-94be-94d4015e7668\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232654 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content\") pod \"98beccef-be81-4934-b000-a41b741ed810\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232722 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content\") pod \"73474c40-4e21-4384-94be-94d4015e7668\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232768 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics\") pod \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232800 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca\") pod \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232843 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities\") pod \"98beccef-be81-4934-b000-a41b741ed810\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232866 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities\") pod \"73474c40-4e21-4384-94be-94d4015e7668\" (UID: \"73474c40-4e21-4384-94be-94d4015e7668\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.232888 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5np54\" (UniqueName: \"kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54\") pod \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\" (UID: \"93dc3a8f-a772-4d28-89d6-3253b6c51aa3\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.233847 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "93dc3a8f-a772-4d28-89d6-3253b6c51aa3" (UID: "93dc3a8f-a772-4d28-89d6-3253b6c51aa3"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.233787 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities" (OuterVolumeSpecName: "utilities") pod "98beccef-be81-4934-b000-a41b741ed810" (UID: "98beccef-be81-4934-b000-a41b741ed810"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.233981 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6qc7\" (UniqueName: \"kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7\") pod \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234352 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities\") pod \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234385 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjrqh\" (UniqueName: \"kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh\") pod \"98beccef-be81-4934-b000-a41b741ed810\" (UID: \"98beccef-be81-4934-b000-a41b741ed810\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234408 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content\") pod \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\" (UID: \"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b\") " Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234603 4801 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities" (OuterVolumeSpecName: "utilities") pod "73474c40-4e21-4384-94be-94d4015e7668" (UID: "73474c40-4e21-4384-94be-94d4015e7668"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234636 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.234997 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.235691 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "93dc3a8f-a772-4d28-89d6-3253b6c51aa3" (UID: "93dc3a8f-a772-4d28-89d6-3253b6c51aa3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.235829 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities" (OuterVolumeSpecName: "utilities") pod "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" (UID: "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.236664 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7" (OuterVolumeSpecName: "kube-api-access-c6qc7") pod "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" (UID: "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b"). InnerVolumeSpecName "kube-api-access-c6qc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.238258 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh" (OuterVolumeSpecName: "kube-api-access-gjrqh") pod "98beccef-be81-4934-b000-a41b741ed810" (UID: "98beccef-be81-4934-b000-a41b741ed810"). InnerVolumeSpecName "kube-api-access-gjrqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.238853 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2" (OuterVolumeSpecName: "kube-api-access-n2rp2") pod "73474c40-4e21-4384-94be-94d4015e7668" (UID: "73474c40-4e21-4384-94be-94d4015e7668"). InnerVolumeSpecName "kube-api-access-n2rp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.245743 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54" (OuterVolumeSpecName: "kube-api-access-5np54") pod "93dc3a8f-a772-4d28-89d6-3253b6c51aa3" (UID: "93dc3a8f-a772-4d28-89d6-3253b6c51aa3"). InnerVolumeSpecName "kube-api-access-5np54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.317538 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98beccef-be81-4934-b000-a41b741ed810" (UID: "98beccef-be81-4934-b000-a41b741ed810"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.326732 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" (UID: "1229f263-2232-4e9c-b2ac-4eabe1b3ee7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336275 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjrqh\" (UniqueName: \"kubernetes.io/projected/98beccef-be81-4934-b000-a41b741ed810-kube-api-access-gjrqh\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336317 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336328 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2rp2\" (UniqueName: \"kubernetes.io/projected/73474c40-4e21-4384-94be-94d4015e7668-kube-api-access-n2rp2\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336339 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/98beccef-be81-4934-b000-a41b741ed810-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336349 4801 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336361 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336371 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5np54\" (UniqueName: \"kubernetes.io/projected/93dc3a8f-a772-4d28-89d6-3253b6c51aa3-kube-api-access-5np54\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336380 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6qc7\" (UniqueName: \"kubernetes.io/projected/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-kube-api-access-c6qc7\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.336389 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.393602 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73474c40-4e21-4384-94be-94d4015e7668" (UID: "73474c40-4e21-4384-94be-94d4015e7668"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.438080 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73474c40-4e21-4384-94be-94d4015e7668-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.841667 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ffnmp" event={"ID":"b9bf536e-ce23-42dc-bbaa-69626ccf959f","Type":"ContainerDied","Data":"1e25fe92a48318010c4102cda4e65f23e0eb1fe390196e90086a0f303dfe4331"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.841710 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ffnmp" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.842145 4801 scope.go:117] "RemoveContainer" containerID="4356a6dc5053a7dcd2ecfcb26ed1c4a68856b99aeb6af91589873e1e78947608" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.843573 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.843743 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-h498b" event={"ID":"93dc3a8f-a772-4d28-89d6-3253b6c51aa3","Type":"ContainerDied","Data":"86b5ec4eba04be5f0ccd29d7c5c0b458af48581aff0f14efc571baeea8d1993c"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.846611 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l77v9" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.846728 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l77v9" event={"ID":"98beccef-be81-4934-b000-a41b741ed810","Type":"ContainerDied","Data":"7de9736277473e85c6ab3559c215e6b14d98aabe7c817d632b86ba0376829d7c"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.858959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8t2r5" event={"ID":"73474c40-4e21-4384-94be-94d4015e7668","Type":"ContainerDied","Data":"2dc2d419a7ac3098e64a6399e62ed15a9f9c93f6d0a152961eb5d5fe2dd626aa"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.859075 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8t2r5" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.872229 4801 scope.go:117] "RemoveContainer" containerID="aef97fa14413311e60481dcbb5761222a29df1bdf1531235975b6f1fb5e63257" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.879380 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fn52d" event={"ID":"1229f263-2232-4e9c-b2ac-4eabe1b3ee7b","Type":"ContainerDied","Data":"d2e852399c8f20b9a96598f78b905615a47a2d6984920e68966ca26c2f4862c3"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.879460 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fn52d" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.882265 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" event={"ID":"f81f28d8-ea1d-4089-bb9c-cbe684ad3044","Type":"ContainerStarted","Data":"e587e4954abd9ad81676e9fb631621ae4b5732cd15941e4f27b2224b0c42b14a"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.882321 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" event={"ID":"f81f28d8-ea1d-4089-bb9c-cbe684ad3044","Type":"ContainerStarted","Data":"f791aa12a0ab0c9e0e1e18f3a04e5a974ca752aa0139bfe780766c8de32eeb77"} Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.883121 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.887869 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.906896 4801 scope.go:117] "RemoveContainer" containerID="5c557f7942f387cdc0b78c3d73ed7916c266f478e1ce8e034b20dd08fe602434" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.907606 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.910808 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-h498b"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.923181 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rpzsr" podStartSLOduration=1.923157608 podStartE2EDuration="1.923157608s" podCreationTimestamp="2025-12-06 
03:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:11:58.919602167 +0000 UTC m=+372.042209749" watchObservedRunningTime="2025-12-06 03:11:58.923157608 +0000 UTC m=+372.045765180" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.947640 4801 scope.go:117] "RemoveContainer" containerID="c80efe2323a55a72e0823f4eef02eb0feedbdbdfa89ceb18523384fb749451e8" Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.949534 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.953709 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ffnmp"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.967043 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.968511 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l77v9"] Dec 06 03:11:58 crc kubenswrapper[4801]: I1206 03:11:58.979362 4801 scope.go:117] "RemoveContainer" containerID="00046d95d661d68549ce20c410f2a2e754726746ffe22ae466de5b102815bde5" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.010553 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.023947 4801 scope.go:117] "RemoveContainer" containerID="cfe1c8b5e991a9f391b3839fe2f87170766dca412949ebd55f797b7c01b4c1f0" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.023981 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fn52d"] Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.026112 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-8t2r5"] Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.030308 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8t2r5"] Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.038984 4801 scope.go:117] "RemoveContainer" containerID="14055be4011b10f39ac770f5c67050acec0682b07d37d11d81156827043027f3" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.063328 4801 scope.go:117] "RemoveContainer" containerID="94a94c67049b7d911caa380a3a59c9252c9b09118108fbff4855adb87f9de09e" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.078466 4801 scope.go:117] "RemoveContainer" containerID="f226db75da46d5d849376af707c45b487562d51add1d9394b47d8151c404febf" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.095872 4801 scope.go:117] "RemoveContainer" containerID="cb8778628fd335a5d6f0b99b5665ae4b4fd1433e051bef4baf62420ad029c917" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.111747 4801 scope.go:117] "RemoveContainer" containerID="68ea112c77ed408c1b68b0c04ad1b04fb1233f53a3ff189033ea773d2f4f943e" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.126881 4801 scope.go:117] "RemoveContainer" containerID="2c0af5d7c87a5b0cbf1c770b0e60bb544a8129a3d2302ac92e8c88a9d52a5934" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.156497 4801 scope.go:117] "RemoveContainer" containerID="26479017464c1c1dcfe4cac1f3a24f3cdd9e773d614b07a2843359fcc04016a0" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.219617 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" path="/var/lib/kubelet/pods/1229f263-2232-4e9c-b2ac-4eabe1b3ee7b/volumes" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.220746 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73474c40-4e21-4384-94be-94d4015e7668" path="/var/lib/kubelet/pods/73474c40-4e21-4384-94be-94d4015e7668/volumes" Dec 06 03:11:59 crc 
kubenswrapper[4801]: I1206 03:11:59.221816 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" path="/var/lib/kubelet/pods/93dc3a8f-a772-4d28-89d6-3253b6c51aa3/volumes" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.223640 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98beccef-be81-4934-b000-a41b741ed810" path="/var/lib/kubelet/pods/98beccef-be81-4934-b000-a41b741ed810/volumes" Dec 06 03:11:59 crc kubenswrapper[4801]: I1206 03:11:59.224429 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" path="/var/lib/kubelet/pods/b9bf536e-ce23-42dc-bbaa-69626ccf959f/volumes" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.226987 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pv4mt"] Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227584 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227606 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227626 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227637 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227652 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227664 4801 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227683 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227694 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227708 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227720 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227738 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227749 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227794 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227803 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227814 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227822 4801 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227833 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227841 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="extract-utilities" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227852 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227861 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227872 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227881 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227891 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227899 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.227912 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.227920 4801 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="extract-content" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228043 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9bf536e-ce23-42dc-bbaa-69626ccf959f" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228060 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228074 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="73474c40-4e21-4384-94be-94d4015e7668" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228088 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="98beccef-be81-4934-b000-a41b741ed810" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228102 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1229f263-2232-4e9c-b2ac-4eabe1b3ee7b" containerName="registry-server" Dec 06 03:12:01 crc kubenswrapper[4801]: E1206 03:12:01.228210 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228219 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.228350 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="93dc3a8f-a772-4d28-89d6-3253b6c51aa3" containerName="marketplace-operator" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.229024 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.231237 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.250180 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pv4mt"] Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.280178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpnns\" (UniqueName: \"kubernetes.io/projected/5608f948-89f4-4888-92bd-1559fc521d5e-kube-api-access-mpnns\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.280246 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-catalog-content\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.280281 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-utilities\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.381684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpnns\" (UniqueName: \"kubernetes.io/projected/5608f948-89f4-4888-92bd-1559fc521d5e-kube-api-access-mpnns\") pod \"certified-operators-pv4mt\" 
(UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.381780 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-catalog-content\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.381819 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-utilities\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.382368 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-catalog-content\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.382467 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5608f948-89f4-4888-92bd-1559fc521d5e-utilities\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.421240 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpnns\" (UniqueName: \"kubernetes.io/projected/5608f948-89f4-4888-92bd-1559fc521d5e-kube-api-access-mpnns\") pod \"certified-operators-pv4mt\" (UID: \"5608f948-89f4-4888-92bd-1559fc521d5e\") " 
pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.422034 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qvppv"] Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.422956 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.426389 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.441656 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvppv"] Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.482890 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbm7z\" (UniqueName: \"kubernetes.io/projected/2ca01f64-dbe9-4738-a4be-835a46b80389-kube-api-access-nbm7z\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.482962 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-catalog-content\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.482987 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-utilities\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " 
pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.552431 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.584320 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbm7z\" (UniqueName: \"kubernetes.io/projected/2ca01f64-dbe9-4738-a4be-835a46b80389-kube-api-access-nbm7z\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.584413 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-catalog-content\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.584447 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-utilities\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.585085 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-utilities\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.585141 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2ca01f64-dbe9-4738-a4be-835a46b80389-catalog-content\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.606815 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbm7z\" (UniqueName: \"kubernetes.io/projected/2ca01f64-dbe9-4738-a4be-835a46b80389-kube-api-access-nbm7z\") pod \"community-operators-qvppv\" (UID: \"2ca01f64-dbe9-4738-a4be-835a46b80389\") " pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.743040 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.937281 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qvppv"] Dec 06 03:12:01 crc kubenswrapper[4801]: W1206 03:12:01.945428 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ca01f64_dbe9_4738_a4be_835a46b80389.slice/crio-57976ccd8d2202be4c367fff528514247d18219e66c1e71ba49ae9e037cfc33a WatchSource:0}: Error finding container 57976ccd8d2202be4c367fff528514247d18219e66c1e71ba49ae9e037cfc33a: Status 404 returned error can't find the container with id 57976ccd8d2202be4c367fff528514247d18219e66c1e71ba49ae9e037cfc33a Dec 06 03:12:01 crc kubenswrapper[4801]: I1206 03:12:01.969435 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pv4mt"] Dec 06 03:12:01 crc kubenswrapper[4801]: W1206 03:12:01.975444 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5608f948_89f4_4888_92bd_1559fc521d5e.slice/crio-edec656426635dbd14c9013a634f09d8cdbc45f35980d8e43452f14446e2b485 WatchSource:0}: Error finding container edec656426635dbd14c9013a634f09d8cdbc45f35980d8e43452f14446e2b485: Status 404 returned error can't find the container with id edec656426635dbd14c9013a634f09d8cdbc45f35980d8e43452f14446e2b485 Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.916092 4801 generic.go:334] "Generic (PLEG): container finished" podID="2ca01f64-dbe9-4738-a4be-835a46b80389" containerID="d8670c86fb6396ba92ba4e247752f94b10e4f0e06eccfdc4fce6e9e1d1d408d3" exitCode=0 Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.916232 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvppv" event={"ID":"2ca01f64-dbe9-4738-a4be-835a46b80389","Type":"ContainerDied","Data":"d8670c86fb6396ba92ba4e247752f94b10e4f0e06eccfdc4fce6e9e1d1d408d3"} Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.916511 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvppv" event={"ID":"2ca01f64-dbe9-4738-a4be-835a46b80389","Type":"ContainerStarted","Data":"57976ccd8d2202be4c367fff528514247d18219e66c1e71ba49ae9e037cfc33a"} Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.929701 4801 generic.go:334] "Generic (PLEG): container finished" podID="5608f948-89f4-4888-92bd-1559fc521d5e" containerID="e105d5ceeb5b627610fbd3224c008497ca90152086744777da8c5db2fb128b5a" exitCode=0 Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.929742 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pv4mt" event={"ID":"5608f948-89f4-4888-92bd-1559fc521d5e","Type":"ContainerDied","Data":"e105d5ceeb5b627610fbd3224c008497ca90152086744777da8c5db2fb128b5a"} Dec 06 03:12:02 crc kubenswrapper[4801]: I1206 03:12:02.929813 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-pv4mt" event={"ID":"5608f948-89f4-4888-92bd-1559fc521d5e","Type":"ContainerStarted","Data":"edec656426635dbd14c9013a634f09d8cdbc45f35980d8e43452f14446e2b485"} Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.619166 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s78gq"] Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.620408 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.622062 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.629599 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s78gq"] Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.714193 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvb25\" (UniqueName: \"kubernetes.io/projected/094f9cf9-987e-4ee8-84a8-19f016fe78f2-kube-api-access-fvb25\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.714272 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-utilities\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.714435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-catalog-content\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.817646 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-utilities\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.817719 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-catalog-content\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.817819 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvb25\" (UniqueName: \"kubernetes.io/projected/094f9cf9-987e-4ee8-84a8-19f016fe78f2-kube-api-access-fvb25\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.818512 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-utilities\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.818722 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/094f9cf9-987e-4ee8-84a8-19f016fe78f2-catalog-content\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.832515 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6w9gj"] Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.833829 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.835434 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w9gj"] Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.835927 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.843476 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvb25\" (UniqueName: \"kubernetes.io/projected/094f9cf9-987e-4ee8-84a8-19f016fe78f2-kube-api-access-fvb25\") pod \"redhat-marketplace-s78gq\" (UID: \"094f9cf9-987e-4ee8-84a8-19f016fe78f2\") " pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.919632 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmb4\" (UniqueName: \"kubernetes.io/projected/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-kube-api-access-wvmb4\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.919713 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-utilities\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.919794 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-catalog-content\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:03 crc kubenswrapper[4801]: I1206 03:12:03.941537 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.023691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmb4\" (UniqueName: \"kubernetes.io/projected/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-kube-api-access-wvmb4\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.023739 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-utilities\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.023867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-catalog-content\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 
crc kubenswrapper[4801]: I1206 03:12:04.024725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-catalog-content\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.025270 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-utilities\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.052595 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmb4\" (UniqueName: \"kubernetes.io/projected/d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720-kube-api-access-wvmb4\") pod \"redhat-operators-6w9gj\" (UID: \"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720\") " pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.180605 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.346718 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s78gq"] Dec 06 03:12:04 crc kubenswrapper[4801]: W1206 03:12:04.353964 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094f9cf9_987e_4ee8_84a8_19f016fe78f2.slice/crio-3e57a9ae3747e1ec71d7c9a837a0a2241e0f942ec0a2acb5d48b9e5f7b134f18 WatchSource:0}: Error finding container 3e57a9ae3747e1ec71d7c9a837a0a2241e0f942ec0a2acb5d48b9e5f7b134f18: Status 404 returned error can't find the container with id 3e57a9ae3747e1ec71d7c9a837a0a2241e0f942ec0a2acb5d48b9e5f7b134f18 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.573492 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6w9gj"] Dec 06 03:12:04 crc kubenswrapper[4801]: W1206 03:12:04.578676 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4fa5d86_6a1b_4a0b_811d_e33f2e0d9720.slice/crio-4400b5cfa9cd73533b53a768133ab5c0793698404159ee78789eeb0d67160a65 WatchSource:0}: Error finding container 4400b5cfa9cd73533b53a768133ab5c0793698404159ee78789eeb0d67160a65: Status 404 returned error can't find the container with id 4400b5cfa9cd73533b53a768133ab5c0793698404159ee78789eeb0d67160a65 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.958398 4801 generic.go:334] "Generic (PLEG): container finished" podID="094f9cf9-987e-4ee8-84a8-19f016fe78f2" containerID="a1a9c895a412792a889cf13a7e51148e63d0311ea196abcc475a5f071b8c2164" exitCode=0 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.958447 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s78gq" 
event={"ID":"094f9cf9-987e-4ee8-84a8-19f016fe78f2","Type":"ContainerDied","Data":"a1a9c895a412792a889cf13a7e51148e63d0311ea196abcc475a5f071b8c2164"} Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.960517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s78gq" event={"ID":"094f9cf9-987e-4ee8-84a8-19f016fe78f2","Type":"ContainerStarted","Data":"3e57a9ae3747e1ec71d7c9a837a0a2241e0f942ec0a2acb5d48b9e5f7b134f18"} Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.964000 4801 generic.go:334] "Generic (PLEG): container finished" podID="2ca01f64-dbe9-4738-a4be-835a46b80389" containerID="367fd3011e5f8bca5787bee140080f0d0e50dac68a2cf1dd5825c841ff551cea" exitCode=0 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.964693 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvppv" event={"ID":"2ca01f64-dbe9-4738-a4be-835a46b80389","Type":"ContainerDied","Data":"367fd3011e5f8bca5787bee140080f0d0e50dac68a2cf1dd5825c841ff551cea"} Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.966425 4801 generic.go:334] "Generic (PLEG): container finished" podID="d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720" containerID="9af1c77e611bda5d85a098a63b7aecae3fb90746af9cf0b956391ff3c382bf8c" exitCode=0 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.966478 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w9gj" event={"ID":"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720","Type":"ContainerDied","Data":"9af1c77e611bda5d85a098a63b7aecae3fb90746af9cf0b956391ff3c382bf8c"} Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.966500 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w9gj" event={"ID":"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720","Type":"ContainerStarted","Data":"4400b5cfa9cd73533b53a768133ab5c0793698404159ee78789eeb0d67160a65"} Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 
03:12:04.969532 4801 generic.go:334] "Generic (PLEG): container finished" podID="5608f948-89f4-4888-92bd-1559fc521d5e" containerID="27cb81af3c899ed72041f34de0a8ecaed7228d3393ab32272c6b0583b2a6112c" exitCode=0 Dec 06 03:12:04 crc kubenswrapper[4801]: I1206 03:12:04.969575 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pv4mt" event={"ID":"5608f948-89f4-4888-92bd-1559fc521d5e","Type":"ContainerDied","Data":"27cb81af3c899ed72041f34de0a8ecaed7228d3393ab32272c6b0583b2a6112c"} Dec 06 03:12:05 crc kubenswrapper[4801]: I1206 03:12:05.984740 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pv4mt" event={"ID":"5608f948-89f4-4888-92bd-1559fc521d5e","Type":"ContainerStarted","Data":"6c53b728c95674b907a2ef67e8aa0644b9873577e16a99d85ff962d9e9d25732"} Dec 06 03:12:05 crc kubenswrapper[4801]: I1206 03:12:05.989145 4801 generic.go:334] "Generic (PLEG): container finished" podID="094f9cf9-987e-4ee8-84a8-19f016fe78f2" containerID="1c266e1c9a7495fdd7b056ca3ce5cb04d5c8d29a93d09223f6de02a3ad09e1fe" exitCode=0 Dec 06 03:12:05 crc kubenswrapper[4801]: I1206 03:12:05.989183 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s78gq" event={"ID":"094f9cf9-987e-4ee8-84a8-19f016fe78f2","Type":"ContainerDied","Data":"1c266e1c9a7495fdd7b056ca3ce5cb04d5c8d29a93d09223f6de02a3ad09e1fe"} Dec 06 03:12:05 crc kubenswrapper[4801]: I1206 03:12:05.991559 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w9gj" event={"ID":"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720","Type":"ContainerStarted","Data":"81516c43cba4b22e23ee9781259a9a68bb7995fc84ce87b0ffd7c79764e74d55"} Dec 06 03:12:06 crc kubenswrapper[4801]: I1206 03:12:06.006188 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pv4mt" podStartSLOduration=2.5800585700000003 
podStartE2EDuration="5.006167002s" podCreationTimestamp="2025-12-06 03:12:01 +0000 UTC" firstStartedPulling="2025-12-06 03:12:02.936053469 +0000 UTC m=+376.058661041" lastFinishedPulling="2025-12-06 03:12:05.362161901 +0000 UTC m=+378.484769473" observedRunningTime="2025-12-06 03:12:06.003406643 +0000 UTC m=+379.126014215" watchObservedRunningTime="2025-12-06 03:12:06.006167002 +0000 UTC m=+379.128774574" Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.001363 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qvppv" event={"ID":"2ca01f64-dbe9-4738-a4be-835a46b80389","Type":"ContainerStarted","Data":"76827213b126de5bdbb4866f870eb734e9995a22f579806c75232a7725391d69"} Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.006279 4801 generic.go:334] "Generic (PLEG): container finished" podID="d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720" containerID="81516c43cba4b22e23ee9781259a9a68bb7995fc84ce87b0ffd7c79764e74d55" exitCode=0 Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.006363 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w9gj" event={"ID":"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720","Type":"ContainerDied","Data":"81516c43cba4b22e23ee9781259a9a68bb7995fc84ce87b0ffd7c79764e74d55"} Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.014433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s78gq" event={"ID":"094f9cf9-987e-4ee8-84a8-19f016fe78f2","Type":"ContainerStarted","Data":"7b8bf0f3ef01d06a184eb1b88b8815662fbbd1138f9cc4cca83e375b63553463"} Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.022534 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qvppv" podStartSLOduration=3.345840269 podStartE2EDuration="6.022514016s" podCreationTimestamp="2025-12-06 03:12:01 +0000 UTC" firstStartedPulling="2025-12-06 03:12:02.929190324 +0000 
UTC m=+376.051797896" lastFinishedPulling="2025-12-06 03:12:05.605864081 +0000 UTC m=+378.728471643" observedRunningTime="2025-12-06 03:12:07.021667041 +0000 UTC m=+380.144274613" watchObservedRunningTime="2025-12-06 03:12:07.022514016 +0000 UTC m=+380.145121588" Dec 06 03:12:07 crc kubenswrapper[4801]: I1206 03:12:07.058028 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s78gq" podStartSLOduration=2.610951807 podStartE2EDuration="4.058008246s" podCreationTimestamp="2025-12-06 03:12:03 +0000 UTC" firstStartedPulling="2025-12-06 03:12:04.959908246 +0000 UTC m=+378.082515828" lastFinishedPulling="2025-12-06 03:12:06.406964675 +0000 UTC m=+379.529572267" observedRunningTime="2025-12-06 03:12:07.057560944 +0000 UTC m=+380.180168516" watchObservedRunningTime="2025-12-06 03:12:07.058008246 +0000 UTC m=+380.180615808" Dec 06 03:12:08 crc kubenswrapper[4801]: I1206 03:12:08.026579 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6w9gj" event={"ID":"d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720","Type":"ContainerStarted","Data":"dff25c445c0f2bc58ba43711cc8d21a1f0b25999e95d3389f328874c6db4c0f0"} Dec 06 03:12:08 crc kubenswrapper[4801]: I1206 03:12:08.046483 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6w9gj" podStartSLOduration=2.544880404 podStartE2EDuration="5.046464356s" podCreationTimestamp="2025-12-06 03:12:03 +0000 UTC" firstStartedPulling="2025-12-06 03:12:04.967631765 +0000 UTC m=+378.090239337" lastFinishedPulling="2025-12-06 03:12:07.469215717 +0000 UTC m=+380.591823289" observedRunningTime="2025-12-06 03:12:08.04167262 +0000 UTC m=+381.164280192" watchObservedRunningTime="2025-12-06 03:12:08.046464356 +0000 UTC m=+381.169071938" Dec 06 03:12:09 crc kubenswrapper[4801]: I1206 03:12:09.823798 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-image-registry/image-registry-66df7c8f76-tn2s5" Dec 06 03:12:09 crc kubenswrapper[4801]: I1206 03:12:09.906151 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.169294 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.169590 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.552694 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.552747 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.595188 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.744187 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 03:12:11.744544 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:11 crc kubenswrapper[4801]: I1206 
03:12:11.779291 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:12 crc kubenswrapper[4801]: I1206 03:12:12.085147 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pv4mt" Dec 06 03:12:12 crc kubenswrapper[4801]: I1206 03:12:12.087859 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qvppv" Dec 06 03:12:13 crc kubenswrapper[4801]: I1206 03:12:13.942440 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:13 crc kubenswrapper[4801]: I1206 03:12:13.942858 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:13 crc kubenswrapper[4801]: I1206 03:12:13.985905 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:14 crc kubenswrapper[4801]: I1206 03:12:14.102903 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s78gq" Dec 06 03:12:14 crc kubenswrapper[4801]: I1206 03:12:14.181555 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:14 crc kubenswrapper[4801]: I1206 03:12:14.182692 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:14 crc kubenswrapper[4801]: I1206 03:12:14.230787 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:15 crc kubenswrapper[4801]: I1206 03:12:15.102382 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-6w9gj" Dec 06 03:12:34 crc kubenswrapper[4801]: I1206 03:12:34.944683 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" podUID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" containerName="registry" containerID="cri-o://1d4a58ca413928d9c8f6a37d3c282d9ccfc83a30d36bb807d6cdec371c34dd91" gracePeriod=30 Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.195111 4801 generic.go:334] "Generic (PLEG): container finished" podID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" containerID="1d4a58ca413928d9c8f6a37d3c282d9ccfc83a30d36bb807d6cdec371c34dd91" exitCode=0 Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.195385 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" event={"ID":"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5","Type":"ContainerDied","Data":"1d4a58ca413928d9c8f6a37d3c282d9ccfc83a30d36bb807d6cdec371c34dd91"} Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.390129 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477298 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvlnk\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477359 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477557 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477594 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477680 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477710 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477771 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.477800 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted\") pod \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\" (UID: \"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5\") " Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.478450 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.479209 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.484881 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk" (OuterVolumeSpecName: "kube-api-access-hvlnk") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "kube-api-access-hvlnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.485402 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.485743 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.487846 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.487977 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.495326 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" (UID: "40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580173 4801 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580233 4801 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580249 4801 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580266 4801 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580281 4801 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580297 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvlnk\" (UniqueName: \"kubernetes.io/projected/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-kube-api-access-hvlnk\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:35 crc kubenswrapper[4801]: I1206 03:12:35.580310 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:12:36 crc kubenswrapper[4801]: I1206 03:12:36.206929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" event={"ID":"40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5","Type":"ContainerDied","Data":"8ec14393e4a7fd769476ffe4aba75bdd68f37afcae20d7d4096ec001fcec599a"} Dec 06 03:12:36 crc kubenswrapper[4801]: I1206 03:12:36.206994 4801 scope.go:117] "RemoveContainer" containerID="1d4a58ca413928d9c8f6a37d3c282d9ccfc83a30d36bb807d6cdec371c34dd91" Dec 06 03:12:36 crc kubenswrapper[4801]: I1206 03:12:36.207056 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-96wqb" Dec 06 03:12:36 crc kubenswrapper[4801]: I1206 03:12:36.263277 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:12:36 crc kubenswrapper[4801]: I1206 03:12:36.270287 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-96wqb"] Dec 06 03:12:37 crc kubenswrapper[4801]: I1206 03:12:37.225155 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" path="/var/lib/kubelet/pods/40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5/volumes" Dec 06 03:12:41 crc kubenswrapper[4801]: I1206 03:12:41.170236 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:12:41 crc kubenswrapper[4801]: I1206 03:12:41.170323 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:12:41 crc kubenswrapper[4801]: I1206 03:12:41.170392 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:12:41 crc kubenswrapper[4801]: I1206 03:12:41.171164 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:12:41 crc kubenswrapper[4801]: I1206 03:12:41.171281 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06" gracePeriod=600 Dec 06 03:12:42 crc kubenswrapper[4801]: I1206 03:12:42.251572 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06" exitCode=0 Dec 06 03:12:42 crc kubenswrapper[4801]: I1206 03:12:42.251750 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06"} Dec 06 03:12:42 crc kubenswrapper[4801]: I1206 03:12:42.252581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf"} Dec 06 03:12:42 crc kubenswrapper[4801]: I1206 03:12:42.252654 4801 scope.go:117] "RemoveContainer" containerID="597c9c69810084e7e4768814de0ea59822551773678076d8498a1ea045dafbf5" Dec 06 03:14:41 crc kubenswrapper[4801]: I1206 03:14:41.170234 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:14:41 crc kubenswrapper[4801]: I1206 
03:14:41.170885 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.176161 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7"] Dec 06 03:15:00 crc kubenswrapper[4801]: E1206 03:15:00.177234 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" containerName="registry" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.177252 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" containerName="registry" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.177884 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e9cd29-e5e3-44c5-95e8-d6e799eaf1f5" containerName="registry" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.179311 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.186777 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.186811 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.191413 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7"] Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.238771 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6vd\" (UniqueName: \"kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.238883 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.239089 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.340712 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.340796 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6vd\" (UniqueName: \"kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.340847 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.341690 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.348261 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.357369 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l6vd\" (UniqueName: \"kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd\") pod \"collect-profiles-29416515-fr4t7\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.548064 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:00 crc kubenswrapper[4801]: I1206 03:15:00.738546 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7"] Dec 06 03:15:01 crc kubenswrapper[4801]: I1206 03:15:01.149860 4801 generic.go:334] "Generic (PLEG): container finished" podID="81dd37a5-79be-462b-83ca-8b4900c7af34" containerID="bc7defeeeb0c84cec11f23e4901699adb2d18fc9e0d3b48c133973d313a47984" exitCode=0 Dec 06 03:15:01 crc kubenswrapper[4801]: I1206 03:15:01.149959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" event={"ID":"81dd37a5-79be-462b-83ca-8b4900c7af34","Type":"ContainerDied","Data":"bc7defeeeb0c84cec11f23e4901699adb2d18fc9e0d3b48c133973d313a47984"} Dec 06 03:15:01 crc kubenswrapper[4801]: I1206 03:15:01.149987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" 
event={"ID":"81dd37a5-79be-462b-83ca-8b4900c7af34","Type":"ContainerStarted","Data":"344a7fb893c37cfdee56a289b0167d9e970c9d1b98aa56985978c90a988877bf"} Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.378047 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.469553 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l6vd\" (UniqueName: \"kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd\") pod \"81dd37a5-79be-462b-83ca-8b4900c7af34\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.469609 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume\") pod \"81dd37a5-79be-462b-83ca-8b4900c7af34\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.469724 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume\") pod \"81dd37a5-79be-462b-83ca-8b4900c7af34\" (UID: \"81dd37a5-79be-462b-83ca-8b4900c7af34\") " Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.470641 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume" (OuterVolumeSpecName: "config-volume") pod "81dd37a5-79be-462b-83ca-8b4900c7af34" (UID: "81dd37a5-79be-462b-83ca-8b4900c7af34"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.475338 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "81dd37a5-79be-462b-83ca-8b4900c7af34" (UID: "81dd37a5-79be-462b-83ca-8b4900c7af34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.481907 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd" (OuterVolumeSpecName: "kube-api-access-6l6vd") pod "81dd37a5-79be-462b-83ca-8b4900c7af34" (UID: "81dd37a5-79be-462b-83ca-8b4900c7af34"). InnerVolumeSpecName "kube-api-access-6l6vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.571578 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81dd37a5-79be-462b-83ca-8b4900c7af34-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.571617 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/81dd37a5-79be-462b-83ca-8b4900c7af34-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:15:02 crc kubenswrapper[4801]: I1206 03:15:02.571628 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l6vd\" (UniqueName: \"kubernetes.io/projected/81dd37a5-79be-462b-83ca-8b4900c7af34-kube-api-access-6l6vd\") on node \"crc\" DevicePath \"\"" Dec 06 03:15:03 crc kubenswrapper[4801]: I1206 03:15:03.164981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" 
event={"ID":"81dd37a5-79be-462b-83ca-8b4900c7af34","Type":"ContainerDied","Data":"344a7fb893c37cfdee56a289b0167d9e970c9d1b98aa56985978c90a988877bf"} Dec 06 03:15:03 crc kubenswrapper[4801]: I1206 03:15:03.165030 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7" Dec 06 03:15:03 crc kubenswrapper[4801]: I1206 03:15:03.165058 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344a7fb893c37cfdee56a289b0167d9e970c9d1b98aa56985978c90a988877bf" Dec 06 03:15:11 crc kubenswrapper[4801]: I1206 03:15:11.170540 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:15:11 crc kubenswrapper[4801]: I1206 03:15:11.171603 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:15:41 crc kubenswrapper[4801]: I1206 03:15:41.170497 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:15:41 crc kubenswrapper[4801]: I1206 03:15:41.171301 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:15:41 crc kubenswrapper[4801]: I1206 03:15:41.171385 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:15:41 crc kubenswrapper[4801]: I1206 03:15:41.172383 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:15:41 crc kubenswrapper[4801]: I1206 03:15:41.172486 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf" gracePeriod=600 Dec 06 03:15:42 crc kubenswrapper[4801]: I1206 03:15:42.412804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf"} Dec 06 03:15:42 crc kubenswrapper[4801]: I1206 03:15:42.413190 4801 scope.go:117] "RemoveContainer" containerID="8cb9cda2b5ef7be9aa14d9ed5af31e70042e45e618144723dbce6c2cbb236c06" Dec 06 03:15:42 crc kubenswrapper[4801]: I1206 03:15:42.412840 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf" exitCode=0 Dec 06 03:15:43 crc kubenswrapper[4801]: I1206 03:15:43.421452 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c"} Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.805473 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-464bh"] Dec 06 03:17:52 crc kubenswrapper[4801]: E1206 03:17:52.806275 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dd37a5-79be-462b-83ca-8b4900c7af34" containerName="collect-profiles" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.806293 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dd37a5-79be-462b-83ca-8b4900c7af34" containerName="collect-profiles" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.806410 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="81dd37a5-79be-462b-83ca-8b4900c7af34" containerName="collect-profiles" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.806893 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.808835 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.808989 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-w9q4s" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.811242 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.818450 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d5sqp"] Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.819307 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d5sqp" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.823144 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-87vz9" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.832485 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8cc5"] Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.833991 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.837229 4801 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-8xchq" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.839422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-464bh"] Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.844041 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d5sqp"] Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.857194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8cc5"] Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.913137 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fm7k\" (UniqueName: \"kubernetes.io/projected/d493aca4-f8ca-4a5d-8f13-2776e232fb01-kube-api-access-5fm7k\") pod \"cert-manager-5b446d88c5-d5sqp\" (UID: \"d493aca4-f8ca-4a5d-8f13-2776e232fb01\") " pod="cert-manager/cert-manager-5b446d88c5-d5sqp" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.913230 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltzz\" (UniqueName: \"kubernetes.io/projected/0ff9519f-3a68-4d4f-9a7b-b09282b3db2b-kube-api-access-7ltzz\") pod \"cert-manager-webhook-5655c58dd6-l8cc5\" (UID: \"0ff9519f-3a68-4d4f-9a7b-b09282b3db2b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:17:52 crc kubenswrapper[4801]: I1206 03:17:52.913270 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9dfn\" (UniqueName: \"kubernetes.io/projected/01b87268-3f8a-4d05-84bb-2c19e182dfb3-kube-api-access-r9dfn\") pod \"cert-manager-cainjector-7f985d654d-464bh\" 
(UID: \"01b87268-3f8a-4d05-84bb-2c19e182dfb3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.014704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltzz\" (UniqueName: \"kubernetes.io/projected/0ff9519f-3a68-4d4f-9a7b-b09282b3db2b-kube-api-access-7ltzz\") pod \"cert-manager-webhook-5655c58dd6-l8cc5\" (UID: \"0ff9519f-3a68-4d4f-9a7b-b09282b3db2b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.014773 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9dfn\" (UniqueName: \"kubernetes.io/projected/01b87268-3f8a-4d05-84bb-2c19e182dfb3-kube-api-access-r9dfn\") pod \"cert-manager-cainjector-7f985d654d-464bh\" (UID: \"01b87268-3f8a-4d05-84bb-2c19e182dfb3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.014815 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fm7k\" (UniqueName: \"kubernetes.io/projected/d493aca4-f8ca-4a5d-8f13-2776e232fb01-kube-api-access-5fm7k\") pod \"cert-manager-5b446d88c5-d5sqp\" (UID: \"d493aca4-f8ca-4a5d-8f13-2776e232fb01\") " pod="cert-manager/cert-manager-5b446d88c5-d5sqp" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.031517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9dfn\" (UniqueName: \"kubernetes.io/projected/01b87268-3f8a-4d05-84bb-2c19e182dfb3-kube-api-access-r9dfn\") pod \"cert-manager-cainjector-7f985d654d-464bh\" (UID: \"01b87268-3f8a-4d05-84bb-2c19e182dfb3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.032577 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltzz\" (UniqueName: 
\"kubernetes.io/projected/0ff9519f-3a68-4d4f-9a7b-b09282b3db2b-kube-api-access-7ltzz\") pod \"cert-manager-webhook-5655c58dd6-l8cc5\" (UID: \"0ff9519f-3a68-4d4f-9a7b-b09282b3db2b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.036507 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fm7k\" (UniqueName: \"kubernetes.io/projected/d493aca4-f8ca-4a5d-8f13-2776e232fb01-kube-api-access-5fm7k\") pod \"cert-manager-5b446d88c5-d5sqp\" (UID: \"d493aca4-f8ca-4a5d-8f13-2776e232fb01\") " pod="cert-manager/cert-manager-5b446d88c5-d5sqp" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.123494 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.133541 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d5sqp" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.156288 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.342852 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-464bh"] Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.358157 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.367745 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d5sqp"] Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.405866 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8cc5"] Dec 06 03:17:53 crc kubenswrapper[4801]: W1206 03:17:53.409823 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ff9519f_3a68_4d4f_9a7b_b09282b3db2b.slice/crio-4b812de8bfe9280e2ea7d5ee14d712bed1c2b6e99da2fd75f89486a352409f1f WatchSource:0}: Error finding container 4b812de8bfe9280e2ea7d5ee14d712bed1c2b6e99da2fd75f89486a352409f1f: Status 404 returned error can't find the container with id 4b812de8bfe9280e2ea7d5ee14d712bed1c2b6e99da2fd75f89486a352409f1f Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.690223 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" event={"ID":"0ff9519f-3a68-4d4f-9a7b-b09282b3db2b","Type":"ContainerStarted","Data":"4b812de8bfe9280e2ea7d5ee14d712bed1c2b6e99da2fd75f89486a352409f1f"} Dec 06 03:17:53 crc kubenswrapper[4801]: I1206 03:17:53.696220 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" event={"ID":"01b87268-3f8a-4d05-84bb-2c19e182dfb3","Type":"ContainerStarted","Data":"215926e3c83ce00738e3cdd0689d2241520eaad1bbc3864c7551405bef5ae7fc"} Dec 06 03:17:53 crc 
kubenswrapper[4801]: I1206 03:17:53.697425 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d5sqp" event={"ID":"d493aca4-f8ca-4a5d-8f13-2776e232fb01","Type":"ContainerStarted","Data":"39f8b667ee8b27a513fa0ef1e4a140039f6485414dd326bac0c9fd395fa4f9f6"} Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.485336 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8qjvm"] Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490062 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-controller" containerID="cri-o://31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490149 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="sbdb" containerID="cri-o://db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490200 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="northd" containerID="cri-o://845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490185 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490192 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-node" containerID="cri-o://0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490110 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="nbdb" containerID="cri-o://e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.490264 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-acl-logging" containerID="cri-o://64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76" gracePeriod=30 Dec 06 03:18:03 crc kubenswrapper[4801]: I1206 03:18:03.525639 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" containerID="cri-o://c0d94a9b76a1f23733f58d1d54b46267232b326ffef3a6f088ad1070affb1cfa" gracePeriod=30 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.771395 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/2.log" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.771962 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/1.log" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.772003 4801 generic.go:334] "Generic (PLEG): container finished" podID="9695c5a7-610b-4c76-aa6f-b4f06f20823e" 
containerID="e7ad082dc60aaf9e0b81f57ccc6e014f7624f603687c31db16438aa3fd0fb4a3" exitCode=2 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.772064 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerDied","Data":"e7ad082dc60aaf9e0b81f57ccc6e014f7624f603687c31db16438aa3fd0fb4a3"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.772111 4801 scope.go:117] "RemoveContainer" containerID="bf47644041b61ea191d0d8bd6e49d093a5c5aee11a8de06feb32278fd5e591af" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.772616 4801 scope.go:117] "RemoveContainer" containerID="e7ad082dc60aaf9e0b81f57ccc6e014f7624f603687c31db16438aa3fd0fb4a3" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.776830 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovnkube-controller/3.log" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.781177 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-acl-logging/0.log" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.781828 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-controller/0.log" Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782282 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="c0d94a9b76a1f23733f58d1d54b46267232b326ffef3a6f088ad1070affb1cfa" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782313 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: 
I1206 03:18:04.782326 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782337 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782347 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782356 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2" exitCode=0 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782366 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76" exitCode=143 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782376 4801 generic.go:334] "Generic (PLEG): container finished" podID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerID="31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029" exitCode=143 Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782430 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"c0d94a9b76a1f23733f58d1d54b46267232b326ffef3a6f088ad1070affb1cfa"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782492 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" 
event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782509 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782524 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782536 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782591 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782608 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.782621 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" 
event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029"} Dec 06 03:18:04 crc kubenswrapper[4801]: I1206 03:18:04.927361 4801 scope.go:117] "RemoveContainer" containerID="9392c82fcac2133d2ac6bc7f63ae3abeee165de1d0f5c4d90327c938d0ced66e" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.024761 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-acl-logging/0.log" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.025740 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-controller/0.log" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.026893 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.096733 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-lw9bh"] Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097182 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="northd" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097202 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="northd" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097221 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097230 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097248 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097258 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097269 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="nbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097278 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="nbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097290 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097299 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097310 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097319 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097331 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kubecfg-setup" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097340 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kubecfg-setup" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097353 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-acl-logging" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097363 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-acl-logging" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097373 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="sbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097381 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="sbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097399 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097409 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097423 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-node" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097432 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-node" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097569 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-node" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097582 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="northd" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097594 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097606 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097621 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097651 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097660 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097674 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097688 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="nbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097699 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovn-acl-logging" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097714 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="sbdb" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097891 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc 
kubenswrapper[4801]: I1206 03:18:05.097902 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: E1206 03:18:05.097915 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.097926 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.098066 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" containerName="ovnkube-controller" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.100664 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195250 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195333 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195373 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes\") pod 
\"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195411 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195421 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195471 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195469 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195527 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log" (OuterVolumeSpecName: "node-log") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195500 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195518 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195563 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195650 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195820 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash" (OuterVolumeSpecName: "host-slash") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195867 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.195892 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.196956 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197002 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197032 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2f9\" (UniqueName: \"kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197072 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197098 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197151 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197188 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197217 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197247 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc 
kubenswrapper[4801]: I1206 03:18:05.197257 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197309 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197317 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197392 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197368 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197374 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197405 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket" (OuterVolumeSpecName: "log-socket") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197432 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197479 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config\") pod \"2cd76211-e203-4b5b-98b0-102d3d67315d\" (UID: \"2cd76211-e203-4b5b-98b0-102d3d67315d\") " Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197578 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197603 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197693 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197736 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197767 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.197826 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198008 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-slash\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198065 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-bin\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198083 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrxs\" (UniqueName: \"kubernetes.io/projected/91f574c5-ec26-4bc5-b784-81b899dbf5bc-kube-api-access-csrxs\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-env-overrides\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198260 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-netns\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198303 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-netd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198336 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-systemd-units\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198370 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-systemd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198413 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-config\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198439 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-script-lib\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198486 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-ovn\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198521 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-node-log\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-etc-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198573 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-kubelet\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198598 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-var-lib-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198621 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-log-socket\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198679 4801 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198694 4801 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198707 4801 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198722 4801 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198734 4801 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-log-socket\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198748 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198782 4801 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198795 4801 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198809 4801 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198823 4801 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198835 4801 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-node-log\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198848 4801 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198864 4801 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-slash\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198877 4801 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198891 4801 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2cd76211-e203-4b5b-98b0-102d3d67315d-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198905 4801 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.198917 4801 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.203092 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.203996 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9" (OuterVolumeSpecName: "kube-api-access-qs2f9") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "kube-api-access-qs2f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.210904 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2cd76211-e203-4b5b-98b0-102d3d67315d" (UID: "2cd76211-e203-4b5b-98b0-102d3d67315d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300075 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-config\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-script-lib\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300757 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-ovn\") pod \"ovnkube-node-lw9bh\" 
(UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-node-log\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300826 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-etc-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300848 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-log-socket\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300874 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-kubelet\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300902 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-var-lib-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300928 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.300978 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301053 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-slash\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301077 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-bin\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301104 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csrxs\" (UniqueName: \"kubernetes.io/projected/91f574c5-ec26-4bc5-b784-81b899dbf5bc-kube-api-access-csrxs\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301129 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-env-overrides\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301157 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-netns\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-netd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc 
kubenswrapper[4801]: I1206 03:18:05.301205 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-systemd-units\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301238 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-systemd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301299 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2cd76211-e203-4b5b-98b0-102d3d67315d-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301315 4801 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2cd76211-e203-4b5b-98b0-102d3d67315d-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-config\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301328 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2f9\" (UniqueName: \"kubernetes.io/projected/2cd76211-e203-4b5b-98b0-102d3d67315d-kube-api-access-qs2f9\") on node \"crc\" DevicePath \"\"" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 
03:18:05.301388 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-systemd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301455 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301456 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301895 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-slash\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.301970 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-bin\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302011 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovnkube-script-lib\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302064 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-run-ovn\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302126 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-node-log\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302162 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-etc-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302193 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-log-socket\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302228 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-kubelet\") pod \"ovnkube-node-lw9bh\" 
(UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302259 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-var-lib-openvswitch\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302324 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-run-netns\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302563 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-host-cni-netd\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/91f574c5-ec26-4bc5-b784-81b899dbf5bc-systemd-units\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.302662 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/91f574c5-ec26-4bc5-b784-81b899dbf5bc-env-overrides\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.306524 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/91f574c5-ec26-4bc5-b784-81b899dbf5bc-ovn-node-metrics-cert\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.328732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csrxs\" (UniqueName: \"kubernetes.io/projected/91f574c5-ec26-4bc5-b784-81b899dbf5bc-kube-api-access-csrxs\") pod \"ovnkube-node-lw9bh\" (UID: \"91f574c5-ec26-4bc5-b784-81b899dbf5bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.436782 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:05 crc kubenswrapper[4801]: W1206 03:18:05.453911 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91f574c5_ec26_4bc5_b784_81b899dbf5bc.slice/crio-fd04cb53580bae7133c28c0d5c0b6f179fc802baa4f2a9e051dcdc80786d7f40 WatchSource:0}: Error finding container fd04cb53580bae7133c28c0d5c0b6f179fc802baa4f2a9e051dcdc80786d7f40: Status 404 returned error can't find the container with id fd04cb53580bae7133c28c0d5c0b6f179fc802baa4f2a9e051dcdc80786d7f40 Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.797566 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-acl-logging/0.log" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.798755 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8qjvm_2cd76211-e203-4b5b-98b0-102d3d67315d/ovn-controller/0.log" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.799136 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" event={"ID":"2cd76211-e203-4b5b-98b0-102d3d67315d","Type":"ContainerDied","Data":"06016df91adfb0b0b32700c0adce7633f10b47b7fe433ea9723bc95643761839"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.799194 4801 scope.go:117] "RemoveContainer" containerID="c0d94a9b76a1f23733f58d1d54b46267232b326ffef3a6f088ad1070affb1cfa" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.799248 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8qjvm" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.802474 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4gxwt_9695c5a7-610b-4c76-aa6f-b4f06f20823e/kube-multus/2.log" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.802531 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4gxwt" event={"ID":"9695c5a7-610b-4c76-aa6f-b4f06f20823e","Type":"ContainerStarted","Data":"455218153b72028d3824ceaf0da56e0c65b03afc586d7c1ba9ba722b8d6764e2"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.805860 4801 generic.go:334] "Generic (PLEG): container finished" podID="91f574c5-ec26-4bc5-b784-81b899dbf5bc" containerID="1ad81c244140d1e42a2adf18c19e2927db817eba2297ac7119efaff762487c19" exitCode=0 Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.805921 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerDied","Data":"1ad81c244140d1e42a2adf18c19e2927db817eba2297ac7119efaff762487c19"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.805943 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"fd04cb53580bae7133c28c0d5c0b6f179fc802baa4f2a9e051dcdc80786d7f40"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.808635 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" event={"ID":"01b87268-3f8a-4d05-84bb-2c19e182dfb3","Type":"ContainerStarted","Data":"4a9a42c041c323e91b674662ad86725aefae62da1ef5fc7bc94c4b81082124f0"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.810907 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d5sqp" 
event={"ID":"d493aca4-f8ca-4a5d-8f13-2776e232fb01","Type":"ContainerStarted","Data":"dae0e554b027c4698c9cf432730a469948d8150a221a176d38fc60b969a1075d"} Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.830109 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8qjvm"] Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.835557 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8qjvm"] Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.865226 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-d5sqp" podStartSLOduration=2.217530211 podStartE2EDuration="13.865204067s" podCreationTimestamp="2025-12-06 03:17:52 +0000 UTC" firstStartedPulling="2025-12-06 03:17:53.371682422 +0000 UTC m=+726.494289994" lastFinishedPulling="2025-12-06 03:18:05.019356248 +0000 UTC m=+738.141963850" observedRunningTime="2025-12-06 03:18:05.861960373 +0000 UTC m=+738.984567935" watchObservedRunningTime="2025-12-06 03:18:05.865204067 +0000 UTC m=+738.987811649" Dec 06 03:18:05 crc kubenswrapper[4801]: I1206 03:18:05.885691 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-464bh" podStartSLOduration=2.31601926 podStartE2EDuration="13.885652603s" podCreationTimestamp="2025-12-06 03:17:52 +0000 UTC" firstStartedPulling="2025-12-06 03:17:53.357930211 +0000 UTC m=+726.480537773" lastFinishedPulling="2025-12-06 03:18:04.927563544 +0000 UTC m=+738.050171116" observedRunningTime="2025-12-06 03:18:05.876598456 +0000 UTC m=+738.999206028" watchObservedRunningTime="2025-12-06 03:18:05.885652603 +0000 UTC m=+739.008260195" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.026574 4801 scope.go:117] "RemoveContainer" containerID="db37fb08bef781e0473979a7a9b7039d2948854e64f81447ddb42efff0640bad" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.188241 
4801 scope.go:117] "RemoveContainer" containerID="e547a314c5b5c653a96a16d8a2cb1793c071cb80f622caa3268b86de0cd77f09" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.204544 4801 scope.go:117] "RemoveContainer" containerID="845e8d2704342338769b9fb034a41e43a6901b252ee0cdc23751dfb6a3124c6c" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.244014 4801 scope.go:117] "RemoveContainer" containerID="73a1066ac1f54179afc9af7d9fdbc1a512fd1f09d9fe4526a2cdd98d1ced11a1" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.262746 4801 scope.go:117] "RemoveContainer" containerID="0840581f1ae9029e240a9dac5a80474c226fd18d1e6693804c47b26234a1c8c2" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.282564 4801 scope.go:117] "RemoveContainer" containerID="64c5fd8a484b1e0e02e5adb60e463bbd6d461ecce0273bdccc3199d89bb33d76" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.321660 4801 scope.go:117] "RemoveContainer" containerID="31189a3756e838af25a831536443f85374995405a74df5afe4071f716d7f4029" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.342024 4801 scope.go:117] "RemoveContainer" containerID="10fd640658d1e92ea9a0a23f9575343743bae63fd74495f4ed2313111471e35b" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.823678 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"fa556587851784b769189b907c1153e771ada3eaea9ba5cc581917c3be158e16"} Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.824144 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"283c830daa1f1b3d2470d702c67989c6779bf4e98b57d7ef01fa7c6c76bcf846"} Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.824163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" 
event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"a327746d1d67d3f0d7f0f25bf34fb9c0234e3d0db4dc094fe86d7859dfd958b4"} Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.824176 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"f1dd688d8c62b79f2c961db9663084a3f0308518f0e0f69da7398cc512e3eeb7"} Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.829554 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" event={"ID":"0ff9519f-3a68-4d4f-9a7b-b09282b3db2b","Type":"ContainerStarted","Data":"ee87968f32f23dc88edbd51edcaf540a0292b977908f034a7c4972069e07c421"} Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.830357 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:18:06 crc kubenswrapper[4801]: I1206 03:18:06.857688 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" podStartSLOduration=2.007835456 podStartE2EDuration="14.857664708s" podCreationTimestamp="2025-12-06 03:17:52 +0000 UTC" firstStartedPulling="2025-12-06 03:17:53.413763823 +0000 UTC m=+726.536371395" lastFinishedPulling="2025-12-06 03:18:06.263593075 +0000 UTC m=+739.386200647" observedRunningTime="2025-12-06 03:18:06.853478398 +0000 UTC m=+739.976085970" watchObservedRunningTime="2025-12-06 03:18:06.857664708 +0000 UTC m=+739.980272280" Dec 06 03:18:07 crc kubenswrapper[4801]: I1206 03:18:07.219632 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd76211-e203-4b5b-98b0-102d3d67315d" path="/var/lib/kubelet/pods/2cd76211-e203-4b5b-98b0-102d3d67315d/volumes" Dec 06 03:18:07 crc kubenswrapper[4801]: I1206 03:18:07.843267 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"16e4fccc827b68637d9613cb61733c5b9681abf78e2459397b7aa80da81b8eb9"} Dec 06 03:18:07 crc kubenswrapper[4801]: I1206 03:18:07.843895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"307ae5e0cbbf107e902f5c94d2c814acdcaabcfe887d32201bcbf13a40dbe2e2"} Dec 06 03:18:09 crc kubenswrapper[4801]: I1206 03:18:09.896922 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"8aa21a7d2ba829733bcb38273ee29d5f93b3ad75dde8519d5567df049b8a9a46"} Dec 06 03:18:11 crc kubenswrapper[4801]: I1206 03:18:11.169995 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:18:11 crc kubenswrapper[4801]: I1206 03:18:11.170522 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.160494 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8cc5" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.930051 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" 
event={"ID":"91f574c5-ec26-4bc5-b784-81b899dbf5bc","Type":"ContainerStarted","Data":"8e0666c17a89c3219f7a0007131132e0a021d0ef841f764ad3892bca857ec462"} Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.930505 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.930652 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.930673 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.964685 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.965996 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" podStartSLOduration=8.965981308 podStartE2EDuration="8.965981308s" podCreationTimestamp="2025-12-06 03:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:18:13.964218252 +0000 UTC m=+747.086825814" watchObservedRunningTime="2025-12-06 03:18:13.965981308 +0000 UTC m=+747.088588880" Dec 06 03:18:13 crc kubenswrapper[4801]: I1206 03:18:13.973527 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:31 crc kubenswrapper[4801]: I1206 03:18:31.194060 4801 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 06 03:18:35 crc kubenswrapper[4801]: I1206 03:18:35.459483 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-lw9bh" Dec 06 03:18:41 crc kubenswrapper[4801]: I1206 03:18:41.170205 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:18:41 crc kubenswrapper[4801]: I1206 03:18:41.170670 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.808801 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj"] Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.810560 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.813150 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.822647 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj"] Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.894119 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.894323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsgn\" (UniqueName: \"kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.894424 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: 
I1206 03:18:53.995651 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.995709 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsgn\" (UniqueName: \"kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.995764 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.996203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:53 crc kubenswrapper[4801]: I1206 03:18:53.996294 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:54 crc kubenswrapper[4801]: I1206 03:18:54.015413 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsgn\" (UniqueName: \"kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:54 crc kubenswrapper[4801]: I1206 03:18:54.186939 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:18:54 crc kubenswrapper[4801]: I1206 03:18:54.595943 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj"] Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.180984 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" event={"ID":"ca37372c-a05f-4ff7-ace6-3f33ad23b959","Type":"ContainerStarted","Data":"f1fb879b3ff94854ee4c5f76ff46676f7bb168c77806a205691b8b8d24d74d70"} Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.743655 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.745362 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.757132 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.817194 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.817292 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhg7\" (UniqueName: \"kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.817332 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.918307 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.918387 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hxhg7\" (UniqueName: \"kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.918412 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.919063 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.919223 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:55 crc kubenswrapper[4801]: I1206 03:18:55.953035 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhg7\" (UniqueName: \"kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7\") pod \"redhat-operators-ghnhj\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:56 crc kubenswrapper[4801]: I1206 03:18:56.064022 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:18:56 crc kubenswrapper[4801]: I1206 03:18:56.186978 4801 generic.go:334] "Generic (PLEG): container finished" podID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerID="14bf5e263a2b0c857340a9777c95de47e701a62b53d8401cbc09b163eef52456" exitCode=0 Dec 06 03:18:56 crc kubenswrapper[4801]: I1206 03:18:56.187018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" event={"ID":"ca37372c-a05f-4ff7-ace6-3f33ad23b959","Type":"ContainerDied","Data":"14bf5e263a2b0c857340a9777c95de47e701a62b53d8401cbc09b163eef52456"} Dec 06 03:18:56 crc kubenswrapper[4801]: I1206 03:18:56.328438 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:18:57 crc kubenswrapper[4801]: I1206 03:18:57.192905 4801 generic.go:334] "Generic (PLEG): container finished" podID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerID="78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64" exitCode=0 Dec 06 03:18:57 crc kubenswrapper[4801]: I1206 03:18:57.192955 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerDied","Data":"78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64"} Dec 06 03:18:57 crc kubenswrapper[4801]: I1206 03:18:57.193044 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerStarted","Data":"b402a46a00fdc63567ddb09afec7a24dade4cad999d3f624e5095790e9c34893"} Dec 06 03:18:58 crc kubenswrapper[4801]: I1206 03:18:58.201774 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" 
event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerStarted","Data":"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76"} Dec 06 03:18:58 crc kubenswrapper[4801]: I1206 03:18:58.204198 4801 generic.go:334] "Generic (PLEG): container finished" podID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerID="2911abfd2c43402502b7699ea9de14d1a4dc9e63b9e5e17b124f352282fb1a2c" exitCode=0 Dec 06 03:18:58 crc kubenswrapper[4801]: I1206 03:18:58.204278 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" event={"ID":"ca37372c-a05f-4ff7-ace6-3f33ad23b959","Type":"ContainerDied","Data":"2911abfd2c43402502b7699ea9de14d1a4dc9e63b9e5e17b124f352282fb1a2c"} Dec 06 03:18:59 crc kubenswrapper[4801]: I1206 03:18:59.211639 4801 generic.go:334] "Generic (PLEG): container finished" podID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerID="aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76" exitCode=0 Dec 06 03:18:59 crc kubenswrapper[4801]: I1206 03:18:59.217681 4801 generic.go:334] "Generic (PLEG): container finished" podID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerID="d58ff2d98645f4a0d5d7aadfbfefd5d39e1015865e824f7697e975876f130aaf" exitCode=0 Dec 06 03:18:59 crc kubenswrapper[4801]: I1206 03:18:59.226052 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerDied","Data":"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76"} Dec 06 03:18:59 crc kubenswrapper[4801]: I1206 03:18:59.226197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" event={"ID":"ca37372c-a05f-4ff7-ace6-3f33ad23b959","Type":"ContainerDied","Data":"d58ff2d98645f4a0d5d7aadfbfefd5d39e1015865e824f7697e975876f130aaf"} Dec 06 03:19:00 crc kubenswrapper[4801]: 
I1206 03:19:00.223820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerStarted","Data":"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303"} Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.248368 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ghnhj" podStartSLOduration=2.795392858 podStartE2EDuration="5.248348234s" podCreationTimestamp="2025-12-06 03:18:55 +0000 UTC" firstStartedPulling="2025-12-06 03:18:57.194655234 +0000 UTC m=+790.317262806" lastFinishedPulling="2025-12-06 03:18:59.6476106 +0000 UTC m=+792.770218182" observedRunningTime="2025-12-06 03:19:00.245889219 +0000 UTC m=+793.368496781" watchObservedRunningTime="2025-12-06 03:19:00.248348234 +0000 UTC m=+793.370955806" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.435494 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.487730 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util\") pod \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.487814 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle\") pod \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.487842 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvsgn\" (UniqueName: \"kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn\") pod \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\" (UID: \"ca37372c-a05f-4ff7-ace6-3f33ad23b959\") " Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.488448 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle" (OuterVolumeSpecName: "bundle") pod "ca37372c-a05f-4ff7-ace6-3f33ad23b959" (UID: "ca37372c-a05f-4ff7-ace6-3f33ad23b959"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.496025 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn" (OuterVolumeSpecName: "kube-api-access-bvsgn") pod "ca37372c-a05f-4ff7-ace6-3f33ad23b959" (UID: "ca37372c-a05f-4ff7-ace6-3f33ad23b959"). InnerVolumeSpecName "kube-api-access-bvsgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.501286 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util" (OuterVolumeSpecName: "util") pod "ca37372c-a05f-4ff7-ace6-3f33ad23b959" (UID: "ca37372c-a05f-4ff7-ace6-3f33ad23b959"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.589085 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-util\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.589121 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ca37372c-a05f-4ff7-ace6-3f33ad23b959-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:00 crc kubenswrapper[4801]: I1206 03:19:00.589132 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvsgn\" (UniqueName: \"kubernetes.io/projected/ca37372c-a05f-4ff7-ace6-3f33ad23b959-kube-api-access-bvsgn\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:01 crc kubenswrapper[4801]: I1206 03:19:01.234916 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" event={"ID":"ca37372c-a05f-4ff7-ace6-3f33ad23b959","Type":"ContainerDied","Data":"f1fb879b3ff94854ee4c5f76ff46676f7bb168c77806a205691b8b8d24d74d70"} Dec 06 03:19:01 crc kubenswrapper[4801]: I1206 03:19:01.235023 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj" Dec 06 03:19:01 crc kubenswrapper[4801]: I1206 03:19:01.235099 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1fb879b3ff94854ee4c5f76ff46676f7bb168c77806a205691b8b8d24d74d70" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.182585 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q"] Dec 06 03:19:05 crc kubenswrapper[4801]: E1206 03:19:05.183086 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="util" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.183097 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="util" Dec 06 03:19:05 crc kubenswrapper[4801]: E1206 03:19:05.183110 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="extract" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.183116 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="extract" Dec 06 03:19:05 crc kubenswrapper[4801]: E1206 03:19:05.183125 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="pull" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.183131 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="pull" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.183222 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca37372c-a05f-4ff7-ace6-3f33ad23b959" containerName="extract" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.183601 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.186340 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.186701 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-jxx5p" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.186804 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.195175 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q"] Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.257096 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgwg\" (UniqueName: \"kubernetes.io/projected/a1550633-c2b8-4a89-a028-b6960c2f3bf9-kube-api-access-rsgwg\") pod \"nmstate-operator-5b5b58f5c8-5989q\" (UID: \"a1550633-c2b8-4a89-a028-b6960c2f3bf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.360226 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsgwg\" (UniqueName: \"kubernetes.io/projected/a1550633-c2b8-4a89-a028-b6960c2f3bf9-kube-api-access-rsgwg\") pod \"nmstate-operator-5b5b58f5c8-5989q\" (UID: \"a1550633-c2b8-4a89-a028-b6960c2f3bf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.379988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsgwg\" (UniqueName: \"kubernetes.io/projected/a1550633-c2b8-4a89-a028-b6960c2f3bf9-kube-api-access-rsgwg\") pod \"nmstate-operator-5b5b58f5c8-5989q\" (UID: 
\"a1550633-c2b8-4a89-a028-b6960c2f3bf9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.502653 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" Dec 06 03:19:05 crc kubenswrapper[4801]: I1206 03:19:05.696513 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q"] Dec 06 03:19:06 crc kubenswrapper[4801]: I1206 03:19:06.064264 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:06 crc kubenswrapper[4801]: I1206 03:19:06.064332 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:06 crc kubenswrapper[4801]: I1206 03:19:06.115325 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:06 crc kubenswrapper[4801]: I1206 03:19:06.261225 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" event={"ID":"a1550633-c2b8-4a89-a028-b6960c2f3bf9","Type":"ContainerStarted","Data":"0f087ff5a6d13a47755fb0e5f24317db5c8ecc3b3f4f94a53a3c28f2e4830fae"} Dec 06 03:19:06 crc kubenswrapper[4801]: I1206 03:19:06.315459 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:08 crc kubenswrapper[4801]: I1206 03:19:08.532626 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:19:08 crc kubenswrapper[4801]: I1206 03:19:08.533223 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ghnhj" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="registry-server" 
containerID="cri-o://e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303" gracePeriod=2 Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.131421 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.140593 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content\") pod \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.140622 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities\") pod \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.140643 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhg7\" (UniqueName: \"kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7\") pod \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\" (UID: \"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a\") " Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.141803 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities" (OuterVolumeSpecName: "utilities") pod "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" (UID: "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.151114 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7" (OuterVolumeSpecName: "kube-api-access-hxhg7") pod "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" (UID: "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a"). InnerVolumeSpecName "kube-api-access-hxhg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.169631 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.169686 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.169729 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.170348 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.170410 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c" gracePeriod=600 Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.237697 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" (UID: "3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.241604 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.241646 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.241674 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhg7\" (UniqueName: \"kubernetes.io/projected/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a-kube-api-access-hxhg7\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.300025 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c" exitCode=0 Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.300077 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c"} Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.300129 4801 scope.go:117] "RemoveContainer" containerID="ef748f9e791af1c653dbbe7c635d2ae63238600b789d7bc247b50a8e9c125baf" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.302656 4801 generic.go:334] "Generic (PLEG): container finished" podID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerID="e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303" exitCode=0 Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.302696 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerDied","Data":"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303"} Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.302704 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ghnhj" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.302716 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ghnhj" event={"ID":"3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a","Type":"ContainerDied","Data":"b402a46a00fdc63567ddb09afec7a24dade4cad999d3f624e5095790e9c34893"} Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.337234 4801 scope.go:117] "RemoveContainer" containerID="e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.337930 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.341246 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ghnhj"] Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.352490 4801 scope.go:117] "RemoveContainer" containerID="aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.379390 4801 scope.go:117] "RemoveContainer" containerID="78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.402697 4801 scope.go:117] "RemoveContainer" containerID="e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303" Dec 06 03:19:11 crc kubenswrapper[4801]: E1206 03:19:11.403352 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303\": container with ID starting with e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303 not found: ID does not exist" containerID="e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.403391 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303"} err="failed to get container status \"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303\": rpc error: code = NotFound desc = could not find container \"e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303\": container with ID starting with e0cec175a973b16dc67935bcb490dee113299acb6ee82fafc75ae552816a7303 not found: ID does not exist" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.403462 4801 scope.go:117] "RemoveContainer" containerID="aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76" Dec 06 03:19:11 crc kubenswrapper[4801]: E1206 03:19:11.403985 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76\": container with ID starting with aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76 not found: ID does not exist" containerID="aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.404052 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76"} err="failed to get container status \"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76\": rpc error: code = NotFound desc = could not find container \"aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76\": container with ID starting with aae9c6b170179a98cd364d2f923b86e99a5022ddfce3836610b83742a3837e76 not found: ID does not exist" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.404098 4801 scope.go:117] "RemoveContainer" containerID="78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64" Dec 06 03:19:11 crc kubenswrapper[4801]: E1206 
03:19:11.404498 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64\": container with ID starting with 78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64 not found: ID does not exist" containerID="78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64" Dec 06 03:19:11 crc kubenswrapper[4801]: I1206 03:19:11.404533 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64"} err="failed to get container status \"78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64\": rpc error: code = NotFound desc = could not find container \"78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64\": container with ID starting with 78bef7d6e8f92904a16ac4ad882230f50043ff99218fba8b68fb2c5645c24e64 not found: ID does not exist" Dec 06 03:19:12 crc kubenswrapper[4801]: I1206 03:19:12.312543 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" event={"ID":"a1550633-c2b8-4a89-a028-b6960c2f3bf9","Type":"ContainerStarted","Data":"d58354f487ead3ef2415c6821d2e2155ef1d037ec066b67ccfaf1dc31f2aab4c"} Dec 06 03:19:12 crc kubenswrapper[4801]: I1206 03:19:12.314826 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36"} Dec 06 03:19:12 crc kubenswrapper[4801]: I1206 03:19:12.343031 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-5989q" podStartSLOduration=1.8867597520000001 podStartE2EDuration="7.342997184s" podCreationTimestamp="2025-12-06 03:19:05 +0000 UTC" 
firstStartedPulling="2025-12-06 03:19:05.706915608 +0000 UTC m=+798.829523180" lastFinishedPulling="2025-12-06 03:19:11.16315304 +0000 UTC m=+804.285760612" observedRunningTime="2025-12-06 03:19:12.337389986 +0000 UTC m=+805.459997618" watchObservedRunningTime="2025-12-06 03:19:12.342997184 +0000 UTC m=+805.465604796" Dec 06 03:19:13 crc kubenswrapper[4801]: I1206 03:19:13.218413 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" path="/var/lib/kubelet/pods/3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a/volumes" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.029386 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq"] Dec 06 03:19:14 crc kubenswrapper[4801]: E1206 03:19:14.029932 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="registry-server" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.029945 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="registry-server" Dec 06 03:19:14 crc kubenswrapper[4801]: E1206 03:19:14.029966 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="extract-content" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.029972 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="extract-content" Dec 06 03:19:14 crc kubenswrapper[4801]: E1206 03:19:14.029983 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="extract-utilities" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.029991 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="extract-utilities" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.030085 
4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3654cfd1-fbc6-4fdb-81dd-9e2a4b8ab47a" containerName="registry-server" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.030614 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.034206 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-b7lxl" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.051169 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.057106 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.057967 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.062397 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.073675 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4j999"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.074315 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078651 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w64d\" (UniqueName: \"kubernetes.io/projected/4c393f31-868e-4b98-af1a-9dd74f31888c-kube-api-access-6w64d\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078698 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078747 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-nmstate-lock\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078795 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-ovs-socket\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078833 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-dbus-socket\") pod \"nmstate-handler-4j999\" 
(UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078866 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54mf4\" (UniqueName: \"kubernetes.io/projected/d6214601-7874-4ac4-bb5f-1743be25951e-kube-api-access-54mf4\") pod \"nmstate-metrics-7f946cbc9-g7gfq\" (UID: \"d6214601-7874-4ac4-bb5f-1743be25951e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.078894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpts8\" (UniqueName: \"kubernetes.io/projected/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-kube-api-access-fpts8\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.104554 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.179820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54mf4\" (UniqueName: \"kubernetes.io/projected/d6214601-7874-4ac4-bb5f-1743be25951e-kube-api-access-54mf4\") pod \"nmstate-metrics-7f946cbc9-g7gfq\" (UID: \"d6214601-7874-4ac4-bb5f-1743be25951e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.179866 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpts8\" (UniqueName: \"kubernetes.io/projected/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-kube-api-access-fpts8\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 
crc kubenswrapper[4801]: I1206 03:19:14.179933 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.180100 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w64d\" (UniqueName: \"kubernetes.io/projected/4c393f31-868e-4b98-af1a-9dd74f31888c-kube-api-access-6w64d\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183375 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-nmstate-lock\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-ovs-socket\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183455 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-dbus-socket\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183692 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-dbus-socket\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183740 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-nmstate-lock\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.183793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/4c393f31-868e-4b98-af1a-9dd74f31888c-ovs-socket\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.195839 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.210724 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.214078 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpts8\" (UniqueName: \"kubernetes.io/projected/5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243-kube-api-access-fpts8\") pod \"nmstate-webhook-5f6d4c5ccb-dt4ch\" (UID: \"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243\") " 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.214429 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w64d\" (UniqueName: \"kubernetes.io/projected/4c393f31-868e-4b98-af1a-9dd74f31888c-kube-api-access-6w64d\") pod \"nmstate-handler-4j999\" (UID: \"4c393f31-868e-4b98-af1a-9dd74f31888c\") " pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.214483 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.218211 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54mf4\" (UniqueName: \"kubernetes.io/projected/d6214601-7874-4ac4-bb5f-1743be25951e-kube-api-access-54mf4\") pod \"nmstate-metrics-7f946cbc9-g7gfq\" (UID: \"d6214601-7874-4ac4-bb5f-1743be25951e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.220258 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-czgwh" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.220533 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.221237 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.224932 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.284498 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.284568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.284598 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkksv\" (UniqueName: \"kubernetes.io/projected/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-kube-api-access-mkksv\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.350817 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.376032 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.385526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkksv\" (UniqueName: \"kubernetes.io/projected/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-kube-api-access-mkksv\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.385614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.385636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: E1206 03:19:14.385720 4801 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 06 03:19:14 crc kubenswrapper[4801]: E1206 03:19:14.385778 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert podName:ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5 nodeName:}" failed. No retries permitted until 2025-12-06 03:19:14.885763378 +0000 UTC m=+808.008370950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-26q4j" (UID: "ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5") : secret "plugin-serving-cert" not found Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.386552 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.394564 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.404468 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkksv\" (UniqueName: \"kubernetes.io/projected/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-kube-api-access-mkksv\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.895779 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.912545 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68954d78db-r28rh"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.913795 4801 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.928700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-26q4j\" (UID: \"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.935894 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68954d78db-r28rh"] Dec 06 03:19:14 crc kubenswrapper[4801]: I1206 03:19:14.995149 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq"] Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997435 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-oauth-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997474 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-trusted-ca-bundle\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997503 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: 
\"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997521 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-service-ca\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-oauth-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:14.997599 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqdtn\" (UniqueName: \"kubernetes.io/projected/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-kube-api-access-jqdtn\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: W1206 03:19:15.009048 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6214601_7874_4ac4_bb5f_1743be25951e.slice/crio-9ed93b383bf75674bdd3c8ef92209512a9a92d356141ce41c45b211f226d563e WatchSource:0}: Error finding container 9ed93b383bf75674bdd3c8ef92209512a9a92d356141ce41c45b211f226d563e: Status 404 returned error can't find the container with id 9ed93b383bf75674bdd3c8ef92209512a9a92d356141ce41c45b211f226d563e Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098330 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-service-ca\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-oauth-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqdtn\" (UniqueName: \"kubernetes.io/projected/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-kube-api-access-jqdtn\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098457 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-oauth-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 
03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098474 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-trusted-ca-bundle\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098505 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.098525 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.100086 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.100219 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-oauth-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 
03:19:15.100445 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-service-ca\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.101248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-trusted-ca-bundle\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.102743 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-serving-cert\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.102972 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-console-oauth-config\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.114702 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqdtn\" (UniqueName: \"kubernetes.io/projected/a4e6ec10-5ae9-4405-835f-bcdc323b96ee-kube-api-access-jqdtn\") pod \"console-68954d78db-r28rh\" (UID: \"a4e6ec10-5ae9-4405-835f-bcdc323b96ee\") " pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.149870 4801 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.271260 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.304136 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch"] Dec 06 03:19:15 crc kubenswrapper[4801]: W1206 03:19:15.313080 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c07b12c_9dad_4c3e_a31a_bf2d8c0c8243.slice/crio-c892f89852cddeee51bf4881df8da3d0597123ee057b59ed5b1e9af53e5e699c WatchSource:0}: Error finding container c892f89852cddeee51bf4881df8da3d0597123ee057b59ed5b1e9af53e5e699c: Status 404 returned error can't find the container with id c892f89852cddeee51bf4881df8da3d0597123ee057b59ed5b1e9af53e5e699c Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.335560 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" event={"ID":"d6214601-7874-4ac4-bb5f-1743be25951e","Type":"ContainerStarted","Data":"9ed93b383bf75674bdd3c8ef92209512a9a92d356141ce41c45b211f226d563e"} Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.336930 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4j999" event={"ID":"4c393f31-868e-4b98-af1a-9dd74f31888c","Type":"ContainerStarted","Data":"42afea6ddf084a07b8f2f9eaff29e7cf03b8eaf04758f2c8a15a9bd088c6c00a"} Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.338579 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" event={"ID":"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243","Type":"ContainerStarted","Data":"c892f89852cddeee51bf4881df8da3d0597123ee057b59ed5b1e9af53e5e699c"} Dec 06 03:19:15 crc 
kubenswrapper[4801]: I1206 03:19:15.360737 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j"] Dec 06 03:19:15 crc kubenswrapper[4801]: I1206 03:19:15.484753 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68954d78db-r28rh"] Dec 06 03:19:16 crc kubenswrapper[4801]: I1206 03:19:16.346709 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68954d78db-r28rh" event={"ID":"a4e6ec10-5ae9-4405-835f-bcdc323b96ee","Type":"ContainerStarted","Data":"adbf3e9405569cf62ff635aa15ae7e406b8cd45f4261b68328f564606b2fca10"} Dec 06 03:19:16 crc kubenswrapper[4801]: I1206 03:19:16.347988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" event={"ID":"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5","Type":"ContainerStarted","Data":"75f8381594d5ff7bea5597dc0b9e4667901e415a1201b464cf61c261f26f7255"} Dec 06 03:19:17 crc kubenswrapper[4801]: I1206 03:19:17.355786 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68954d78db-r28rh" event={"ID":"a4e6ec10-5ae9-4405-835f-bcdc323b96ee","Type":"ContainerStarted","Data":"2d1467a0db4500d85e878040647ea214dea77605a91a5ce0140e5d19b6a711b4"} Dec 06 03:19:17 crc kubenswrapper[4801]: I1206 03:19:17.377160 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-68954d78db-r28rh" podStartSLOduration=3.377135729 podStartE2EDuration="3.377135729s" podCreationTimestamp="2025-12-06 03:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:19:17.373089221 +0000 UTC m=+810.495696803" watchObservedRunningTime="2025-12-06 03:19:17.377135729 +0000 UTC m=+810.499743301" Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.369689 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" event={"ID":"d6214601-7874-4ac4-bb5f-1743be25951e","Type":"ContainerStarted","Data":"4a5b5e28f936d4ca3372934b9c55d0bf37085f3d9a03d5c47ef436397f40b878"} Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.372111 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4j999" event={"ID":"4c393f31-868e-4b98-af1a-9dd74f31888c","Type":"ContainerStarted","Data":"c71ce3daf159705d59547bc2c4dc527317d70c14ec8bba7a004ab9f77a1a3f12"} Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.372231 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.374170 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" event={"ID":"5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243","Type":"ContainerStarted","Data":"5ba8a21c143975cb14126458711b5016b736c90f36d2381b713ace09fd64b15b"} Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.374263 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.375897 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" event={"ID":"ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5","Type":"ContainerStarted","Data":"5dd89a7ad75f979106230233295de8ad46ff092c19bde701dccd546e3d12dd3a"} Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.393401 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4j999" podStartSLOduration=1.909943723 podStartE2EDuration="5.39333988s" podCreationTimestamp="2025-12-06 03:19:14 +0000 UTC" firstStartedPulling="2025-12-06 03:19:14.91008624 +0000 UTC m=+808.032693812" lastFinishedPulling="2025-12-06 03:19:18.393482397 +0000 UTC 
m=+811.516089969" observedRunningTime="2025-12-06 03:19:19.389451767 +0000 UTC m=+812.512059359" watchObservedRunningTime="2025-12-06 03:19:19.39333988 +0000 UTC m=+812.515947472" Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.413053 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" podStartSLOduration=2.341366126 podStartE2EDuration="5.413027991s" podCreationTimestamp="2025-12-06 03:19:14 +0000 UTC" firstStartedPulling="2025-12-06 03:19:15.316483961 +0000 UTC m=+808.439091543" lastFinishedPulling="2025-12-06 03:19:18.388145836 +0000 UTC m=+811.510753408" observedRunningTime="2025-12-06 03:19:19.408109961 +0000 UTC m=+812.530717533" watchObservedRunningTime="2025-12-06 03:19:19.413027991 +0000 UTC m=+812.535635583" Dec 06 03:19:19 crc kubenswrapper[4801]: I1206 03:19:19.424492 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-26q4j" podStartSLOduration=2.443610032 podStartE2EDuration="5.424470653s" podCreationTimestamp="2025-12-06 03:19:14 +0000 UTC" firstStartedPulling="2025-12-06 03:19:15.396727265 +0000 UTC m=+808.519334837" lastFinishedPulling="2025-12-06 03:19:18.377587886 +0000 UTC m=+811.500195458" observedRunningTime="2025-12-06 03:19:19.420831168 +0000 UTC m=+812.543438750" watchObservedRunningTime="2025-12-06 03:19:19.424470653 +0000 UTC m=+812.547078235" Dec 06 03:19:22 crc kubenswrapper[4801]: I1206 03:19:22.404210 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" event={"ID":"d6214601-7874-4ac4-bb5f-1743be25951e","Type":"ContainerStarted","Data":"137029b95a279447884135c7e86709c24e9c1bc49f07d9449231416111e461b7"} Dec 06 03:19:22 crc kubenswrapper[4801]: I1206 03:19:22.421697 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-g7gfq" 
podStartSLOduration=1.236064494 podStartE2EDuration="8.421682069s" podCreationTimestamp="2025-12-06 03:19:14 +0000 UTC" firstStartedPulling="2025-12-06 03:19:15.011589635 +0000 UTC m=+808.134197207" lastFinishedPulling="2025-12-06 03:19:22.19720719 +0000 UTC m=+815.319814782" observedRunningTime="2025-12-06 03:19:22.418102583 +0000 UTC m=+815.540710155" watchObservedRunningTime="2025-12-06 03:19:22.421682069 +0000 UTC m=+815.544289641" Dec 06 03:19:24 crc kubenswrapper[4801]: I1206 03:19:24.416664 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4j999" Dec 06 03:19:25 crc kubenswrapper[4801]: I1206 03:19:25.271662 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:25 crc kubenswrapper[4801]: I1206 03:19:25.272040 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:25 crc kubenswrapper[4801]: I1206 03:19:25.278068 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:25 crc kubenswrapper[4801]: I1206 03:19:25.430457 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68954d78db-r28rh" Dec 06 03:19:25 crc kubenswrapper[4801]: I1206 03:19:25.498639 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:19:34 crc kubenswrapper[4801]: I1206 03:19:34.385459 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-dt4ch" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.712472 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7"] Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.714176 
4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.718269 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.740281 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7"] Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.881903 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.881980 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp46v\" (UniqueName: \"kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.882047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.983896 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.984028 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.984081 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp46v\" (UniqueName: \"kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.984444 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:48 crc kubenswrapper[4801]: I1206 03:19:48.984678 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:49 crc kubenswrapper[4801]: I1206 03:19:49.009248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp46v\" (UniqueName: \"kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:49 crc kubenswrapper[4801]: I1206 03:19:49.039902 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:19:49 crc kubenswrapper[4801]: I1206 03:19:49.470422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7"] Dec 06 03:19:49 crc kubenswrapper[4801]: W1206 03:19:49.478296 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod719f8bb0_bfb5_4867_9d5c_5dd56bd64bab.slice/crio-cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503 WatchSource:0}: Error finding container cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503: Status 404 returned error can't find the container with id cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503 Dec 06 03:19:49 crc kubenswrapper[4801]: I1206 03:19:49.622336 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerStarted","Data":"b1b1709da00fc816b3d22198d5c0eaf910736edae5a82aca597278a2d15b1b20"} Dec 06 03:19:49 crc kubenswrapper[4801]: I1206 03:19:49.622378 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerStarted","Data":"cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503"} Dec 06 03:19:50 crc kubenswrapper[4801]: I1206 03:19:50.544543 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qnr4c" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" containerName="console" containerID="cri-o://6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71" gracePeriod=15 Dec 06 03:19:50 crc kubenswrapper[4801]: I1206 03:19:50.629166 4801 generic.go:334] "Generic (PLEG): container finished" podID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerID="b1b1709da00fc816b3d22198d5c0eaf910736edae5a82aca597278a2d15b1b20" exitCode=0 Dec 06 03:19:50 crc kubenswrapper[4801]: I1206 03:19:50.629209 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerDied","Data":"b1b1709da00fc816b3d22198d5c0eaf910736edae5a82aca597278a2d15b1b20"} Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.363011 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qnr4c_4fac250c-7d1a-435f-a613-8c4646b7be9d/console/0.log" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.363381 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.525571 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.525692 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.525750 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.526238 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.526302 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nw4n\" (UniqueName: \"kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.526351 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.526408 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config\") pod \"4fac250c-7d1a-435f-a613-8c4646b7be9d\" (UID: \"4fac250c-7d1a-435f-a613-8c4646b7be9d\") " Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.527415 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.529510 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config" (OuterVolumeSpecName: "console-config") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.529864 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.530328 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca" (OuterVolumeSpecName: "service-ca") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.532815 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.534248 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n" (OuterVolumeSpecName: "kube-api-access-9nw4n") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "kube-api-access-9nw4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.534789 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4fac250c-7d1a-435f-a613-8c4646b7be9d" (UID: "4fac250c-7d1a-435f-a613-8c4646b7be9d"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628322 4801 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628358 4801 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628367 4801 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4fac250c-7d1a-435f-a613-8c4646b7be9d-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628377 4801 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628386 4801 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628394 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nw4n\" (UniqueName: \"kubernetes.io/projected/4fac250c-7d1a-435f-a613-8c4646b7be9d-kube-api-access-9nw4n\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.628404 4801 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4fac250c-7d1a-435f-a613-8c4646b7be9d-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 03:19:51 crc 
kubenswrapper[4801]: I1206 03:19:51.635121 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qnr4c_4fac250c-7d1a-435f-a613-8c4646b7be9d/console/0.log" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.635184 4801 generic.go:334] "Generic (PLEG): container finished" podID="4fac250c-7d1a-435f-a613-8c4646b7be9d" containerID="6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71" exitCode=2 Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.635246 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnr4c" event={"ID":"4fac250c-7d1a-435f-a613-8c4646b7be9d","Type":"ContainerDied","Data":"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71"} Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.635289 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qnr4c" event={"ID":"4fac250c-7d1a-435f-a613-8c4646b7be9d","Type":"ContainerDied","Data":"fbe134151129cd4ae5408beae35c33989a980d62dff12ccdfe04947cf1ead24c"} Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.635312 4801 scope.go:117] "RemoveContainer" containerID="6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.635359 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qnr4c" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.654260 4801 scope.go:117] "RemoveContainer" containerID="6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71" Dec 06 03:19:51 crc kubenswrapper[4801]: E1206 03:19:51.654712 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71\": container with ID starting with 6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71 not found: ID does not exist" containerID="6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.654864 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71"} err="failed to get container status \"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71\": rpc error: code = NotFound desc = could not find container \"6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71\": container with ID starting with 6f8d5aef28c5b5f8040982604de4aece1e0d6377b21adbe308c0990a82ba1c71 not found: ID does not exist" Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.674339 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:19:51 crc kubenswrapper[4801]: I1206 03:19:51.680253 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qnr4c"] Dec 06 03:19:53 crc kubenswrapper[4801]: I1206 03:19:53.222578 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" path="/var/lib/kubelet/pods/4fac250c-7d1a-435f-a613-8c4646b7be9d/volumes" Dec 06 03:19:57 crc kubenswrapper[4801]: I1206 03:19:57.680563 4801 generic.go:334] "Generic (PLEG): 
container finished" podID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerID="33931420ea785f3d15f51a7523e49adaea16220851a510b97727cd659becf758" exitCode=0 Dec 06 03:19:57 crc kubenswrapper[4801]: I1206 03:19:57.680611 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerDied","Data":"33931420ea785f3d15f51a7523e49adaea16220851a510b97727cd659becf758"} Dec 06 03:19:58 crc kubenswrapper[4801]: I1206 03:19:58.690342 4801 generic.go:334] "Generic (PLEG): container finished" podID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerID="e039032abbac41571e96d35f9890cc4f706aed18cab6ddeaf90b71616632ed6a" exitCode=0 Dec 06 03:19:58 crc kubenswrapper[4801]: I1206 03:19:58.690450 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerDied","Data":"e039032abbac41571e96d35f9890cc4f706aed18cab6ddeaf90b71616632ed6a"} Dec 06 03:19:59 crc kubenswrapper[4801]: I1206 03:19:59.945423 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.136967 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp46v\" (UniqueName: \"kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v\") pod \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.137043 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util\") pod \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.137082 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle\") pod \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\" (UID: \"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab\") " Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.138384 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle" (OuterVolumeSpecName: "bundle") pod "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" (UID: "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.141943 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v" (OuterVolumeSpecName: "kube-api-access-qp46v") pod "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" (UID: "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab"). InnerVolumeSpecName "kube-api-access-qp46v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.147573 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util" (OuterVolumeSpecName: "util") pod "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" (UID: "719f8bb0-bfb5-4867-9d5c-5dd56bd64bab"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.238149 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.238187 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp46v\" (UniqueName: \"kubernetes.io/projected/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-kube-api-access-qp46v\") on node \"crc\" DevicePath \"\"" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.238201 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/719f8bb0-bfb5-4867-9d5c-5dd56bd64bab-util\") on node \"crc\" DevicePath \"\"" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.707159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" event={"ID":"719f8bb0-bfb5-4867-9d5c-5dd56bd64bab","Type":"ContainerDied","Data":"cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503"} Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.707219 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb209afa48fda6594c9ea09cf16b5c7fcb8890fca1a4828412905d09f01f9503" Dec 06 03:20:00 crc kubenswrapper[4801]: I1206 03:20:00.707253 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.919243 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg"] Dec 06 03:20:14 crc kubenswrapper[4801]: E1206 03:20:14.919941 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="util" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.919952 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="util" Dec 06 03:20:14 crc kubenswrapper[4801]: E1206 03:20:14.919964 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="extract" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.919970 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="extract" Dec 06 03:20:14 crc kubenswrapper[4801]: E1206 03:20:14.919981 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="pull" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.919987 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="pull" Dec 06 03:20:14 crc kubenswrapper[4801]: E1206 03:20:14.919996 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" containerName="console" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.920036 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" containerName="console" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.920153 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fac250c-7d1a-435f-a613-8c4646b7be9d" 
containerName="console" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.920168 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="719f8bb0-bfb5-4867-9d5c-5dd56bd64bab" containerName="extract" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.920543 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.927882 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.927922 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.928950 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.929120 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-l62gz" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.932099 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r42mr\" (UniqueName: \"kubernetes.io/projected/0db979e7-3ec7-4dea-a942-9311355d7dca-kube-api-access-r42mr\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.932154 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-webhook-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: 
\"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.932242 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-apiservice-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.934973 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 06 03:20:14 crc kubenswrapper[4801]: I1206 03:20:14.957654 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg"] Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.033257 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-apiservice-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.033384 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r42mr\" (UniqueName: \"kubernetes.io/projected/0db979e7-3ec7-4dea-a942-9311355d7dca-kube-api-access-r42mr\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.033873 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-webhook-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.039651 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-apiservice-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.039691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0db979e7-3ec7-4dea-a942-9311355d7dca-webhook-cert\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.052570 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r42mr\" (UniqueName: \"kubernetes.io/projected/0db979e7-3ec7-4dea-a942-9311355d7dca-kube-api-access-r42mr\") pod \"metallb-operator-controller-manager-744bd9595-tpzsg\" (UID: \"0db979e7-3ec7-4dea-a942-9311355d7dca\") " pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.238180 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.254462 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d"] Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.255675 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.259110 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.259216 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wzz69" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.259350 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.268426 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d"] Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.440019 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4s9\" (UniqueName: \"kubernetes.io/projected/4c6e4e25-33b0-47b5-826d-ba09dc42e398-kube-api-access-nn4s9\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.440578 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-webhook-cert\") pod 
\"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.440611 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-apiservice-cert\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.541842 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4s9\" (UniqueName: \"kubernetes.io/projected/4c6e4e25-33b0-47b5-826d-ba09dc42e398-kube-api-access-nn4s9\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.541973 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-webhook-cert\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.542003 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-apiservice-cert\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.555269 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-apiservice-cert\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.558551 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg"] Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.564410 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4c6e4e25-33b0-47b5-826d-ba09dc42e398-webhook-cert\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.564559 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4s9\" (UniqueName: \"kubernetes.io/projected/4c6e4e25-33b0-47b5-826d-ba09dc42e398-kube-api-access-nn4s9\") pod \"metallb-operator-webhook-server-86ccb96b46-7tj8d\" (UID: \"4c6e4e25-33b0-47b5-826d-ba09dc42e398\") " pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: W1206 03:20:15.572769 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db979e7_3ec7_4dea_a942_9311355d7dca.slice/crio-404c9e8904d891b67ca149dd906faca09ba4d904b532e2ec1005643aa8a9df57 WatchSource:0}: Error finding container 404c9e8904d891b67ca149dd906faca09ba4d904b532e2ec1005643aa8a9df57: Status 404 returned error can't find the container with id 404c9e8904d891b67ca149dd906faca09ba4d904b532e2ec1005643aa8a9df57 Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 
03:20:15.623855 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.866054 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d"] Dec 06 03:20:15 crc kubenswrapper[4801]: I1206 03:20:15.871555 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" event={"ID":"0db979e7-3ec7-4dea-a942-9311355d7dca","Type":"ContainerStarted","Data":"404c9e8904d891b67ca149dd906faca09ba4d904b532e2ec1005643aa8a9df57"} Dec 06 03:20:15 crc kubenswrapper[4801]: W1206 03:20:15.875143 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c6e4e25_33b0_47b5_826d_ba09dc42e398.slice/crio-e039cef26ee6e21fd223cc0da7259dcd90c5186b9d9bc43801be114fdb19f551 WatchSource:0}: Error finding container e039cef26ee6e21fd223cc0da7259dcd90c5186b9d9bc43801be114fdb19f551: Status 404 returned error can't find the container with id e039cef26ee6e21fd223cc0da7259dcd90c5186b9d9bc43801be114fdb19f551 Dec 06 03:20:16 crc kubenswrapper[4801]: I1206 03:20:16.885046 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" event={"ID":"4c6e4e25-33b0-47b5-826d-ba09dc42e398","Type":"ContainerStarted","Data":"e039cef26ee6e21fd223cc0da7259dcd90c5186b9d9bc43801be114fdb19f551"} Dec 06 03:20:20 crc kubenswrapper[4801]: I1206 03:20:20.919603 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" event={"ID":"4c6e4e25-33b0-47b5-826d-ba09dc42e398","Type":"ContainerStarted","Data":"6bfd0734a84c9f54b584fa1f7eff304e9268aad382c2c86b4fa98ec90102ebe8"} Dec 06 03:20:20 crc kubenswrapper[4801]: I1206 03:20:20.920244 4801 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:20 crc kubenswrapper[4801]: I1206 03:20:20.922307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" event={"ID":"0db979e7-3ec7-4dea-a942-9311355d7dca","Type":"ContainerStarted","Data":"979acc0114bd0b4eb4120a8d0365ed3a845707337562199e2c4d218e829ea953"} Dec 06 03:20:20 crc kubenswrapper[4801]: I1206 03:20:20.922553 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:20 crc kubenswrapper[4801]: I1206 03:20:20.958530 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" podStartSLOduration=1.5724784550000002 podStartE2EDuration="5.958496644s" podCreationTimestamp="2025-12-06 03:20:15 +0000 UTC" firstStartedPulling="2025-12-06 03:20:15.878134174 +0000 UTC m=+869.000741746" lastFinishedPulling="2025-12-06 03:20:20.264152363 +0000 UTC m=+873.386759935" observedRunningTime="2025-12-06 03:20:20.942914191 +0000 UTC m=+874.065521763" watchObservedRunningTime="2025-12-06 03:20:20.958496644 +0000 UTC m=+874.081104216" Dec 06 03:20:35 crc kubenswrapper[4801]: I1206 03:20:35.632505 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-86ccb96b46-7tj8d" Dec 06 03:20:35 crc kubenswrapper[4801]: I1206 03:20:35.668908 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" podStartSLOduration=17.003332401 podStartE2EDuration="21.668883786s" podCreationTimestamp="2025-12-06 03:20:14 +0000 UTC" firstStartedPulling="2025-12-06 03:20:15.576577068 +0000 UTC m=+868.699184640" lastFinishedPulling="2025-12-06 03:20:20.242128443 +0000 UTC 
m=+873.364736025" observedRunningTime="2025-12-06 03:20:20.986785348 +0000 UTC m=+874.109392920" watchObservedRunningTime="2025-12-06 03:20:35.668883786 +0000 UTC m=+888.791491388" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.241597 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-744bd9595-tpzsg" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.938586 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-fs6tf"] Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.942107 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.944716 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.945442 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.946686 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-p2wvh" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.947418 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk"] Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.948652 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.949917 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 06 03:20:55 crc kubenswrapper[4801]: I1206 03:20:55.968461 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk"] Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.032860 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7f8ct"] Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.033995 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.039039 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.039091 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.039151 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.039551 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bddpg" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.055729 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-cdtfj"] Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.057191 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058304 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058382 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-sockets\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058411 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058449 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn89q\" (UniqueName: \"kubernetes.io/projected/243b0e89-177c-4d78-a335-b8184f7f9cd3-kube-api-access-bn89q\") pod \"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: \"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058609 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhddv\" (UniqueName: \"kubernetes.io/projected/fe7e5882-0bbb-477d-889d-0c6ba99ea883-kube-api-access-mhddv\") pod \"frr-k8s-fs6tf\" (UID: 
\"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-conf\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058678 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-startup\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058740 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058746 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/243b0e89-177c-4d78-a335-b8184f7f9cd3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: \"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.058823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-reloader\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.073174 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cdtfj"] Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 
03:20:56.160387 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhddv\" (UniqueName: \"kubernetes.io/projected/fe7e5882-0bbb-477d-889d-0c6ba99ea883-kube-api-access-mhddv\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160452 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160476 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metrics-certs\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-conf\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160513 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-startup\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160534 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/243b0e89-177c-4d78-a335-b8184f7f9cd3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: \"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw2n7\" (UniqueName: \"kubernetes.io/projected/47d040e7-00fd-42d1-a652-2a9ef2eb383e-kube-api-access-cw2n7\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160570 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-cert\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-reloader\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160615 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metallb-excludel2\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160637 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics\") pod 
\"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160661 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lhsh\" (UniqueName: \"kubernetes.io/projected/22b5e566-52a6-48f6-9104-f61cf4dfdfce-kube-api-access-5lhsh\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160719 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-sockets\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160738 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160776 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.160792 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn89q\" (UniqueName: \"kubernetes.io/projected/243b0e89-177c-4d78-a335-b8184f7f9cd3-kube-api-access-bn89q\") pod \"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: 
\"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.161360 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.161420 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-sockets\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.161494 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-reloader\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.161554 4801 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.161642 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs podName:fe7e5882-0bbb-477d-889d-0c6ba99ea883 nodeName:}" failed. No retries permitted until 2025-12-06 03:20:56.66161829 +0000 UTC m=+909.784225862 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs") pod "frr-k8s-fs6tf" (UID: "fe7e5882-0bbb-477d-889d-0c6ba99ea883") : secret "frr-k8s-certs-secret" not found Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.161923 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-conf\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.162036 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/fe7e5882-0bbb-477d-889d-0c6ba99ea883-frr-startup\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.175842 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/243b0e89-177c-4d78-a335-b8184f7f9cd3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: \"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.180201 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhddv\" (UniqueName: \"kubernetes.io/projected/fe7e5882-0bbb-477d-889d-0c6ba99ea883-kube-api-access-mhddv\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.181444 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn89q\" (UniqueName: \"kubernetes.io/projected/243b0e89-177c-4d78-a335-b8184f7f9cd3-kube-api-access-bn89q\") pod 
\"frr-k8s-webhook-server-7fcb986d4-lpggk\" (UID: \"243b0e89-177c-4d78-a335-b8184f7f9cd3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.261972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metallb-excludel2\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.262798 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metallb-excludel2\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.262898 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lhsh\" (UniqueName: \"kubernetes.io/projected/22b5e566-52a6-48f6-9104-f61cf4dfdfce-kube-api-access-5lhsh\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.263003 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.263107 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " 
pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.263181 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metrics-certs\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.263258 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw2n7\" (UniqueName: \"kubernetes.io/projected/47d040e7-00fd-42d1-a652-2a9ef2eb383e-kube-api-access-cw2n7\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.263326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-cert\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.263424 4801 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.263523 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist podName:47d040e7-00fd-42d1-a652-2a9ef2eb383e nodeName:}" failed. No retries permitted until 2025-12-06 03:20:56.76349877 +0000 UTC m=+909.886106342 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist") pod "speaker-7f8ct" (UID: "47d040e7-00fd-42d1-a652-2a9ef2eb383e") : secret "metallb-memberlist" not found Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.263708 4801 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.263782 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs podName:22b5e566-52a6-48f6-9104-f61cf4dfdfce nodeName:}" failed. No retries permitted until 2025-12-06 03:20:56.763771557 +0000 UTC m=+909.886379129 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs") pod "controller-f8648f98b-cdtfj" (UID: "22b5e566-52a6-48f6-9104-f61cf4dfdfce") : secret "controller-certs-secret" not found Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.265787 4801 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.280357 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-metrics-certs\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.290814 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-cert\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc 
kubenswrapper[4801]: I1206 03:20:56.297551 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lhsh\" (UniqueName: \"kubernetes.io/projected/22b5e566-52a6-48f6-9104-f61cf4dfdfce-kube-api-access-5lhsh\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.298202 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw2n7\" (UniqueName: \"kubernetes.io/projected/47d040e7-00fd-42d1-a652-2a9ef2eb383e-kube-api-access-cw2n7\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.301193 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.669628 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.674636 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fe7e5882-0bbb-477d-889d-0c6ba99ea883-metrics-certs\") pod \"frr-k8s-fs6tf\" (UID: \"fe7e5882-0bbb-477d-889d-0c6ba99ea883\") " pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.770586 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk"] Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.770899 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.770961 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.771110 4801 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 03:20:56 crc kubenswrapper[4801]: E1206 03:20:56.771168 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist podName:47d040e7-00fd-42d1-a652-2a9ef2eb383e nodeName:}" failed. No retries permitted until 2025-12-06 03:20:57.771153528 +0000 UTC m=+910.893761100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist") pod "speaker-7f8ct" (UID: "47d040e7-00fd-42d1-a652-2a9ef2eb383e") : secret "metallb-memberlist" not found Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.776627 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/22b5e566-52a6-48f6-9104-f61cf4dfdfce-metrics-certs\") pod \"controller-f8648f98b-cdtfj\" (UID: \"22b5e566-52a6-48f6-9104-f61cf4dfdfce\") " pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.886337 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:20:56 crc kubenswrapper[4801]: I1206 03:20:56.975094 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.228593 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-cdtfj"] Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.232241 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" event={"ID":"243b0e89-177c-4d78-a335-b8184f7f9cd3","Type":"ContainerStarted","Data":"d258284f9947c7faacfaa322ed03d8535eb2b7e0f5bdfbcd4c0ed4ef09ae5de8"} Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.233445 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"e78e728ce30784ae62aa2a31e86fd060e826b124ccfc1684b5cd4e373c13eec9"} Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.787864 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.798122 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/47d040e7-00fd-42d1-a652-2a9ef2eb383e-memberlist\") pod \"speaker-7f8ct\" (UID: \"47d040e7-00fd-42d1-a652-2a9ef2eb383e\") " pod="metallb-system/speaker-7f8ct" Dec 06 03:20:57 crc kubenswrapper[4801]: I1206 03:20:57.853528 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-7f8ct" Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.244736 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdtfj" event={"ID":"22b5e566-52a6-48f6-9104-f61cf4dfdfce","Type":"ContainerStarted","Data":"48d6db7ccf7f0942b0572c0c38a9c37cc0435fd37da755d243c3b430c01044de"} Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.245161 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdtfj" event={"ID":"22b5e566-52a6-48f6-9104-f61cf4dfdfce","Type":"ContainerStarted","Data":"9086e5f6a92892ba488b77ec5b40b623d1c032e51d0911bef5cb57c5a084d209"} Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.245174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-cdtfj" event={"ID":"22b5e566-52a6-48f6-9104-f61cf4dfdfce","Type":"ContainerStarted","Data":"b5ea5945afa8652ed66697627efd23a4baba803a20e47f2b38461cd9e4964b8f"} Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.245211 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.247022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7f8ct" event={"ID":"47d040e7-00fd-42d1-a652-2a9ef2eb383e","Type":"ContainerStarted","Data":"007f11652030b3c06e8915a92f461e499713f7075b1ec6c286ff79441eb5b5cf"} Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.247060 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7f8ct" event={"ID":"47d040e7-00fd-42d1-a652-2a9ef2eb383e","Type":"ContainerStarted","Data":"0951daa5f4f46ad30b0463949e40a8852f108df60962cb730ce7ae7015fd84d9"} Dec 06 03:20:58 crc kubenswrapper[4801]: I1206 03:20:58.262323 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-cdtfj" 
podStartSLOduration=2.262303462 podStartE2EDuration="2.262303462s" podCreationTimestamp="2025-12-06 03:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:20:58.260248187 +0000 UTC m=+911.382855759" watchObservedRunningTime="2025-12-06 03:20:58.262303462 +0000 UTC m=+911.384911034" Dec 06 03:20:59 crc kubenswrapper[4801]: I1206 03:20:59.262999 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7f8ct" event={"ID":"47d040e7-00fd-42d1-a652-2a9ef2eb383e","Type":"ContainerStarted","Data":"67f1868c709fdbb2e16a9acfe2a64995d9a5f01545f37037655e1f39e921bf1a"} Dec 06 03:20:59 crc kubenswrapper[4801]: I1206 03:20:59.263078 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7f8ct" Dec 06 03:20:59 crc kubenswrapper[4801]: I1206 03:20:59.281603 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7f8ct" podStartSLOduration=3.281585267 podStartE2EDuration="3.281585267s" podCreationTimestamp="2025-12-06 03:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:20:59.277608449 +0000 UTC m=+912.400216021" watchObservedRunningTime="2025-12-06 03:20:59.281585267 +0000 UTC m=+912.404192839" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.335492 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.337617 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.359428 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.393419 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.393504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2n72\" (UniqueName: \"kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.393540 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.495105 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2n72\" (UniqueName: \"kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.495161 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.495256 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.495869 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.496186 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.517216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2n72\" (UniqueName: \"kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72\") pod \"community-operators-8mfpp\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:03 crc kubenswrapper[4801]: I1206 03:21:03.658440 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:05 crc kubenswrapper[4801]: I1206 03:21:05.588142 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.347338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" event={"ID":"243b0e89-177c-4d78-a335-b8184f7f9cd3","Type":"ContainerStarted","Data":"6ea19f586a71a01e7c34308bc0c4693454a849d1b4adaca93d15acff7082a7ed"} Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.347744 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.352208 4801 generic.go:334] "Generic (PLEG): container finished" podID="fe7e5882-0bbb-477d-889d-0c6ba99ea883" containerID="dc307ac4c83ad7e2535ec1388a8ce72851ff42fb4981d2936ae5500caee9a8c6" exitCode=0 Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.352326 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerDied","Data":"dc307ac4c83ad7e2535ec1388a8ce72851ff42fb4981d2936ae5500caee9a8c6"} Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.355673 4801 generic.go:334] "Generic (PLEG): container finished" podID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerID="b72d8af6c7ed382e9ada4cfa1f7453c1c435a5320de6cfb11f286352203ac400" exitCode=0 Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.355715 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerDied","Data":"b72d8af6c7ed382e9ada4cfa1f7453c1c435a5320de6cfb11f286352203ac400"} Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.355745 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerStarted","Data":"3a42d733027891af8fb67ae412fcc93456d23a79c6c27e2095aa8a6c10635391"} Dec 06 03:21:06 crc kubenswrapper[4801]: I1206 03:21:06.413307 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" podStartSLOduration=2.982539296 podStartE2EDuration="11.413270202s" podCreationTimestamp="2025-12-06 03:20:55 +0000 UTC" firstStartedPulling="2025-12-06 03:20:56.778709723 +0000 UTC m=+909.901317295" lastFinishedPulling="2025-12-06 03:21:05.209440629 +0000 UTC m=+918.332048201" observedRunningTime="2025-12-06 03:21:06.376359482 +0000 UTC m=+919.498967064" watchObservedRunningTime="2025-12-06 03:21:06.413270202 +0000 UTC m=+919.535877794" Dec 06 03:21:07 crc kubenswrapper[4801]: I1206 03:21:07.365487 4801 generic.go:334] "Generic (PLEG): container finished" podID="fe7e5882-0bbb-477d-889d-0c6ba99ea883" containerID="5d802a2d700f1ac04c21fad68616587cf42e25c7db28c0a22f06121764b6c8a6" exitCode=0 Dec 06 03:21:07 crc kubenswrapper[4801]: I1206 03:21:07.365555 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerDied","Data":"5d802a2d700f1ac04c21fad68616587cf42e25c7db28c0a22f06121764b6c8a6"} Dec 06 03:21:08 crc kubenswrapper[4801]: I1206 03:21:08.376652 4801 generic.go:334] "Generic (PLEG): container finished" podID="fe7e5882-0bbb-477d-889d-0c6ba99ea883" containerID="19ec91bf77f0f924c5c95a89b6f899baa453010719c67fd803523762541a70d3" exitCode=0 Dec 06 03:21:08 crc kubenswrapper[4801]: I1206 03:21:08.376778 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerDied","Data":"19ec91bf77f0f924c5c95a89b6f899baa453010719c67fd803523762541a70d3"} Dec 06 
03:21:08 crc kubenswrapper[4801]: E1206 03:21:08.871421 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc65a4654_e2ff_4899_bddf_dc0e8a20ed7b.slice/crio-conmon-9e9bdf5c0c3d19b36d263822595c974d16b608a54a9ffd3cbaa4596c3ca8046f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc65a4654_e2ff_4899_bddf_dc0e8a20ed7b.slice/crio-9e9bdf5c0c3d19b36d263822595c974d16b608a54a9ffd3cbaa4596c3ca8046f.scope\": RecentStats: unable to find data in memory cache]" Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.386528 4801 generic.go:334] "Generic (PLEG): container finished" podID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerID="9e9bdf5c0c3d19b36d263822595c974d16b608a54a9ffd3cbaa4596c3ca8046f" exitCode=0 Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.386610 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerDied","Data":"9e9bdf5c0c3d19b36d263822595c974d16b608a54a9ffd3cbaa4596c3ca8046f"} Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.393587 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"ef91625ea99a7c1b0cb0bc38d69e365d8ab51dd7fca7ee514c204ffc3e5fff3d"} Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.393630 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"ec57cf3e4f3544dbe9b7d8670999ae6a1772ecf79474760812a494add7461605"} Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.393643 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" 
event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"d06481a7f8fdc907940593450fea67bb0deb8dc7d0a952651ec7ea315138d063"} Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.393657 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"9f26a905fa685264e83dd3de4d35d00ad14112f8e0a06c90e6a1fbb1eb0ebef0"} Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.942999 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.945464 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:09 crc kubenswrapper[4801]: I1206 03:21:09.963370 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.064432 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.064485 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.064511 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkdn\" 
(UniqueName: \"kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.165410 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.165462 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.165483 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkdn\" (UniqueName: \"kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.166248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.166396 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.202246 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkdn\" (UniqueName: \"kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn\") pod \"redhat-marketplace-svqwx\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.267506 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.402678 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerStarted","Data":"74f16433f7229ef3fa0e5580b4037c0c0a2d9cbd6200db2837bbe88469174630"} Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.407239 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"029a5f6c801b7912fe5383c743ea8ddb3bf9a78cb4f5caf345b951ca8a9893a5"} Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.407266 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-fs6tf" event={"ID":"fe7e5882-0bbb-477d-889d-0c6ba99ea883","Type":"ContainerStarted","Data":"0c607f41d4f465ffdd77ec1fc6612ab70430b652486cf8f7d3ded97f902fbb92"} Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.407857 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.445894 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8mfpp" podStartSLOduration=4.004984967 podStartE2EDuration="7.445866465s" podCreationTimestamp="2025-12-06 03:21:03 +0000 UTC" firstStartedPulling="2025-12-06 03:21:06.359266259 +0000 UTC m=+919.481873831" lastFinishedPulling="2025-12-06 03:21:09.800147757 +0000 UTC m=+922.922755329" observedRunningTime="2025-12-06 03:21:10.443990855 +0000 UTC m=+923.566598417" watchObservedRunningTime="2025-12-06 03:21:10.445866465 +0000 UTC m=+923.568474027" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.472720 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-fs6tf" podStartSLOduration=7.285921173 podStartE2EDuration="15.472704422s" podCreationTimestamp="2025-12-06 03:20:55 +0000 UTC" firstStartedPulling="2025-12-06 03:20:57.048305694 +0000 UTC m=+910.170913266" lastFinishedPulling="2025-12-06 03:21:05.235088933 +0000 UTC m=+918.357696515" observedRunningTime="2025-12-06 03:21:10.46966944 +0000 UTC m=+923.592277012" watchObservedRunningTime="2025-12-06 03:21:10.472704422 +0000 UTC m=+923.595311984" Dec 06 03:21:10 crc kubenswrapper[4801]: I1206 03:21:10.587471 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.169596 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.169916 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.418121 4801 generic.go:334] "Generic (PLEG): container finished" podID="918d2940-3af7-4406-953f-aa0f83a07d13" containerID="69511561e8f5d977b8eaa88ebf08957237663e87d6fe0605ea474fd5c579f4e4" exitCode=0 Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.418165 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerDied","Data":"69511561e8f5d977b8eaa88ebf08957237663e87d6fe0605ea474fd5c579f4e4"} Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.418216 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerStarted","Data":"4a8b61ff3dc5d6892003623a62f8128301c8baf30e7f4331b720ceeaa3b03a81"} Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.887357 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:21:11 crc kubenswrapper[4801]: I1206 03:21:11.943874 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-fs6tf" Dec 06 03:21:12 crc kubenswrapper[4801]: I1206 03:21:12.428893 4801 generic.go:334] "Generic (PLEG): container finished" podID="918d2940-3af7-4406-953f-aa0f83a07d13" containerID="aaf82584fd54817768104996e1fba2b8388f63f50272dca4a1ff930fa436137c" exitCode=0 Dec 06 03:21:12 crc kubenswrapper[4801]: I1206 03:21:12.429028 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerDied","Data":"aaf82584fd54817768104996e1fba2b8388f63f50272dca4a1ff930fa436137c"} Dec 06 03:21:13 crc kubenswrapper[4801]: I1206 03:21:13.438426 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerStarted","Data":"2f8513b5a920849b54c04ae358dc8bc5a1c4b6989843987f141add5a82cbe41b"} Dec 06 03:21:13 crc kubenswrapper[4801]: I1206 03:21:13.466602 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svqwx" podStartSLOduration=2.778437284 podStartE2EDuration="4.466581103s" podCreationTimestamp="2025-12-06 03:21:09 +0000 UTC" firstStartedPulling="2025-12-06 03:21:11.419948246 +0000 UTC m=+924.542555818" lastFinishedPulling="2025-12-06 03:21:13.108092055 +0000 UTC m=+926.230699637" observedRunningTime="2025-12-06 03:21:13.46387366 +0000 UTC m=+926.586481322" watchObservedRunningTime="2025-12-06 03:21:13.466581103 +0000 UTC m=+926.589188675" Dec 06 03:21:13 crc kubenswrapper[4801]: I1206 03:21:13.659389 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:13 crc kubenswrapper[4801]: I1206 03:21:13.659494 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:13 crc kubenswrapper[4801]: I1206 03:21:13.714622 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:14 crc kubenswrapper[4801]: I1206 03:21:14.491921 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:15 crc kubenswrapper[4801]: I1206 03:21:15.995112 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:16 crc kubenswrapper[4801]: I1206 03:21:16.310223 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-lpggk" Dec 06 03:21:16 crc kubenswrapper[4801]: I1206 03:21:16.463997 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8mfpp" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="registry-server" containerID="cri-o://74f16433f7229ef3fa0e5580b4037c0c0a2d9cbd6200db2837bbe88469174630" gracePeriod=2 Dec 06 03:21:16 crc kubenswrapper[4801]: I1206 03:21:16.980695 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-cdtfj" Dec 06 03:21:17 crc kubenswrapper[4801]: I1206 03:21:17.862217 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7f8ct" Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.486788 4801 generic.go:334] "Generic (PLEG): container finished" podID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerID="74f16433f7229ef3fa0e5580b4037c0c0a2d9cbd6200db2837bbe88469174630" exitCode=0 Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.486805 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerDied","Data":"74f16433f7229ef3fa0e5580b4037c0c0a2d9cbd6200db2837bbe88469174630"} Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.704318 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.833161 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2n72\" (UniqueName: \"kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72\") pod \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.833928 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities\") pod \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.834005 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content\") pod \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\" (UID: \"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b\") " Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.837119 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities" (OuterVolumeSpecName: "utilities") pod "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" (UID: "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.857092 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72" (OuterVolumeSpecName: "kube-api-access-r2n72") pod "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" (UID: "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b"). InnerVolumeSpecName "kube-api-access-r2n72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.935045 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:18 crc kubenswrapper[4801]: I1206 03:21:18.935091 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2n72\" (UniqueName: \"kubernetes.io/projected/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-kube-api-access-r2n72\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.034624 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" (UID: "c65a4654-e2ff-4899-bddf-dc0e8a20ed7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.036233 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.500660 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mfpp" event={"ID":"c65a4654-e2ff-4899-bddf-dc0e8a20ed7b","Type":"ContainerDied","Data":"3a42d733027891af8fb67ae412fcc93456d23a79c6c27e2095aa8a6c10635391"} Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.500787 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mfpp" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.500815 4801 scope.go:117] "RemoveContainer" containerID="74f16433f7229ef3fa0e5580b4037c0c0a2d9cbd6200db2837bbe88469174630" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.528383 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.534134 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8mfpp"] Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.537928 4801 scope.go:117] "RemoveContainer" containerID="9e9bdf5c0c3d19b36d263822595c974d16b608a54a9ffd3cbaa4596c3ca8046f" Dec 06 03:21:19 crc kubenswrapper[4801]: I1206 03:21:19.569164 4801 scope.go:117] "RemoveContainer" containerID="b72d8af6c7ed382e9ada4cfa1f7453c1c435a5320de6cfb11f286352203ac400" Dec 06 03:21:20 crc kubenswrapper[4801]: I1206 03:21:20.267777 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:20 crc kubenswrapper[4801]: I1206 03:21:20.268292 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:20 crc kubenswrapper[4801]: I1206 03:21:20.341519 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:20 crc kubenswrapper[4801]: I1206 03:21:20.567879 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:21 crc kubenswrapper[4801]: I1206 03:21:21.220410 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" path="/var/lib/kubelet/pods/c65a4654-e2ff-4899-bddf-dc0e8a20ed7b/volumes" Dec 06 03:21:21 crc 
kubenswrapper[4801]: I1206 03:21:21.999569 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:22 crc kubenswrapper[4801]: I1206 03:21:22.529183 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svqwx" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="registry-server" containerID="cri-o://2f8513b5a920849b54c04ae358dc8bc5a1c4b6989843987f141add5a82cbe41b" gracePeriod=2 Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.552525 4801 generic.go:334] "Generic (PLEG): container finished" podID="918d2940-3af7-4406-953f-aa0f83a07d13" containerID="2f8513b5a920849b54c04ae358dc8bc5a1c4b6989843987f141add5a82cbe41b" exitCode=0 Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.552623 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerDied","Data":"2f8513b5a920849b54c04ae358dc8bc5a1c4b6989843987f141add5a82cbe41b"} Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.616961 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xvjcr"] Dec 06 03:21:24 crc kubenswrapper[4801]: E1206 03:21:24.622510 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="extract-utilities" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.622541 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="extract-utilities" Dec 06 03:21:24 crc kubenswrapper[4801]: E1206 03:21:24.622576 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="extract-content" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.622585 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="extract-content" Dec 06 03:21:24 crc kubenswrapper[4801]: E1206 03:21:24.622603 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="registry-server" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.622612 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="registry-server" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.622803 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65a4654-e2ff-4899-bddf-dc0e8a20ed7b" containerName="registry-server" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.623659 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xvjcr" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.626702 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-drjzl" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.627467 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.629564 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvjcr"] Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.630217 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.641475 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ltj6\" (UniqueName: \"kubernetes.io/projected/73bc5fd9-16cd-4af0-aa93-6230d268eaf6-kube-api-access-8ltj6\") pod \"openstack-operator-index-xvjcr\" (UID: \"73bc5fd9-16cd-4af0-aa93-6230d268eaf6\") " 
pod="openstack-operators/openstack-operator-index-xvjcr" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.743336 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ltj6\" (UniqueName: \"kubernetes.io/projected/73bc5fd9-16cd-4af0-aa93-6230d268eaf6-kube-api-access-8ltj6\") pod \"openstack-operator-index-xvjcr\" (UID: \"73bc5fd9-16cd-4af0-aa93-6230d268eaf6\") " pod="openstack-operators/openstack-operator-index-xvjcr" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.769038 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ltj6\" (UniqueName: \"kubernetes.io/projected/73bc5fd9-16cd-4af0-aa93-6230d268eaf6-kube-api-access-8ltj6\") pod \"openstack-operator-index-xvjcr\" (UID: \"73bc5fd9-16cd-4af0-aa93-6230d268eaf6\") " pod="openstack-operators/openstack-operator-index-xvjcr" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.822184 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.843708 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities\") pod \"918d2940-3af7-4406-953f-aa0f83a07d13\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.843804 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content\") pod \"918d2940-3af7-4406-953f-aa0f83a07d13\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.843871 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wkdn\" (UniqueName: \"kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn\") pod \"918d2940-3af7-4406-953f-aa0f83a07d13\" (UID: \"918d2940-3af7-4406-953f-aa0f83a07d13\") " Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.846124 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities" (OuterVolumeSpecName: "utilities") pod "918d2940-3af7-4406-953f-aa0f83a07d13" (UID: "918d2940-3af7-4406-953f-aa0f83a07d13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.856724 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn" (OuterVolumeSpecName: "kube-api-access-8wkdn") pod "918d2940-3af7-4406-953f-aa0f83a07d13" (UID: "918d2940-3af7-4406-953f-aa0f83a07d13"). InnerVolumeSpecName "kube-api-access-8wkdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.866078 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "918d2940-3af7-4406-953f-aa0f83a07d13" (UID: "918d2940-3af7-4406-953f-aa0f83a07d13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.945095 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.945143 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wkdn\" (UniqueName: \"kubernetes.io/projected/918d2940-3af7-4406-953f-aa0f83a07d13-kube-api-access-8wkdn\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.945157 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/918d2940-3af7-4406-953f-aa0f83a07d13-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:21:24 crc kubenswrapper[4801]: I1206 03:21:24.953976 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xvjcr" Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.435226 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xvjcr"] Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.568147 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svqwx" event={"ID":"918d2940-3af7-4406-953f-aa0f83a07d13","Type":"ContainerDied","Data":"4a8b61ff3dc5d6892003623a62f8128301c8baf30e7f4331b720ceeaa3b03a81"} Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.568253 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svqwx" Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.568314 4801 scope.go:117] "RemoveContainer" containerID="2f8513b5a920849b54c04ae358dc8bc5a1c4b6989843987f141add5a82cbe41b" Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.570739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvjcr" event={"ID":"73bc5fd9-16cd-4af0-aa93-6230d268eaf6","Type":"ContainerStarted","Data":"86099a37992038acdcd45b982e17b4017e8d2e1464001bd1a77c2bfed39ccc7b"} Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.597404 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.602225 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svqwx"] Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.618910 4801 scope.go:117] "RemoveContainer" containerID="aaf82584fd54817768104996e1fba2b8388f63f50272dca4a1ff930fa436137c" Dec 06 03:21:25 crc kubenswrapper[4801]: I1206 03:21:25.721289 4801 scope.go:117] "RemoveContainer" containerID="69511561e8f5d977b8eaa88ebf08957237663e87d6fe0605ea474fd5c579f4e4" Dec 06 03:21:26 
crc kubenswrapper[4801]: I1206 03:21:26.892488 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-fs6tf"
Dec 06 03:21:27 crc kubenswrapper[4801]: I1206 03:21:27.223298 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" path="/var/lib/kubelet/pods/918d2940-3af7-4406-953f-aa0f83a07d13/volumes"
Dec 06 03:21:29 crc kubenswrapper[4801]: I1206 03:21:29.636022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xvjcr" event={"ID":"73bc5fd9-16cd-4af0-aa93-6230d268eaf6","Type":"ContainerStarted","Data":"7f33b8877c526cc9f3ab615878102d8a46f6329da617527e5277d2d6f0db5702"}
Dec 06 03:21:29 crc kubenswrapper[4801]: I1206 03:21:29.661140 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xvjcr" podStartSLOduration=2.518858512 podStartE2EDuration="5.661115753s" podCreationTimestamp="2025-12-06 03:21:24 +0000 UTC" firstStartedPulling="2025-12-06 03:21:25.427073224 +0000 UTC m=+938.549680836" lastFinishedPulling="2025-12-06 03:21:28.569330465 +0000 UTC m=+941.691938077" observedRunningTime="2025-12-06 03:21:29.659577702 +0000 UTC m=+942.782185354" watchObservedRunningTime="2025-12-06 03:21:29.661115753 +0000 UTC m=+942.783723335"
Dec 06 03:21:34 crc kubenswrapper[4801]: I1206 03:21:34.954301 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xvjcr"
Dec 06 03:21:34 crc kubenswrapper[4801]: I1206 03:21:34.956370 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xvjcr"
Dec 06 03:21:34 crc kubenswrapper[4801]: I1206 03:21:34.996085 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xvjcr"
Dec 06 03:21:35 crc kubenswrapper[4801]: I1206 03:21:35.719514 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xvjcr"
Dec 06 03:21:41 crc kubenswrapper[4801]: I1206 03:21:41.170292 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 03:21:41 crc kubenswrapper[4801]: I1206 03:21:41.170674 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.438604 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"]
Dec 06 03:21:42 crc kubenswrapper[4801]: E1206 03:21:42.438888 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="extract-utilities"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.438904 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="extract-utilities"
Dec 06 03:21:42 crc kubenswrapper[4801]: E1206 03:21:42.438916 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="extract-content"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.438922 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="extract-content"
Dec 06 03:21:42 crc kubenswrapper[4801]: E1206 03:21:42.438935 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="registry-server"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.438941 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="registry-server"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.439076 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="918d2940-3af7-4406-953f-aa0f83a07d13" containerName="registry-server"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.440120 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.442066 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-ldb6p"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.448126 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.448450 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.448571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvst\" (UniqueName: \"kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.459658 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"]
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.549842 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.550234 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.550398 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvst\" (UniqueName: \"kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.550567 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.550669 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.574125 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvst\" (UniqueName: \"kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst\") pod \"ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") " pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:42 crc kubenswrapper[4801]: I1206 03:21:42.757489 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:43 crc kubenswrapper[4801]: I1206 03:21:43.200851 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"]
Dec 06 03:21:43 crc kubenswrapper[4801]: W1206 03:21:43.213939 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded3e81c7_6078_42e2_a230_1dcb4b0ce766.slice/crio-8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813 WatchSource:0}: Error finding container 8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813: Status 404 returned error can't find the container with id 8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813
Dec 06 03:21:43 crc kubenswrapper[4801]: I1206 03:21:43.736648 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerStarted","Data":"8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813"}
Dec 06 03:21:45 crc kubenswrapper[4801]: I1206 03:21:45.750960 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerStarted","Data":"a3fa28c45c43a3e64a967378f364d7a69ea18a1dee8f7a2c25f1d5bca177d3f0"}
Dec 06 03:21:47 crc kubenswrapper[4801]: I1206 03:21:47.768587 4801 generic.go:334] "Generic (PLEG): container finished" podID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerID="a3fa28c45c43a3e64a967378f364d7a69ea18a1dee8f7a2c25f1d5bca177d3f0" exitCode=0
Dec 06 03:21:47 crc kubenswrapper[4801]: I1206 03:21:47.768668 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerDied","Data":"a3fa28c45c43a3e64a967378f364d7a69ea18a1dee8f7a2c25f1d5bca177d3f0"}
Dec 06 03:21:48 crc kubenswrapper[4801]: I1206 03:21:48.797583 4801 generic.go:334] "Generic (PLEG): container finished" podID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerID="611ddce0bde44dd9306b5f7f5e926f8063add1c62cee205d7990f34e0d9ee71f" exitCode=0
Dec 06 03:21:48 crc kubenswrapper[4801]: I1206 03:21:48.799510 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerDied","Data":"611ddce0bde44dd9306b5f7f5e926f8063add1c62cee205d7990f34e0d9ee71f"}
Dec 06 03:21:49 crc kubenswrapper[4801]: I1206 03:21:49.808254 4801 generic.go:334] "Generic (PLEG): container finished" podID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerID="d7ddbe2196d596ffde81952d1bb6267cbe8b7a87fe4399ef82909b773022f309" exitCode=0
Dec 06 03:21:49 crc kubenswrapper[4801]: I1206 03:21:49.808379 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerDied","Data":"d7ddbe2196d596ffde81952d1bb6267cbe8b7a87fe4399ef82909b773022f309"}
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.072893 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.188459 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzvst\" (UniqueName: \"kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst\") pod \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") "
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.189417 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util\") pod \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") "
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.189454 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle\") pod \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\" (UID: \"ed3e81c7-6078-42e2-a230-1dcb4b0ce766\") "
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.190149 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle" (OuterVolumeSpecName: "bundle") pod "ed3e81c7-6078-42e2-a230-1dcb4b0ce766" (UID: "ed3e81c7-6078-42e2-a230-1dcb4b0ce766"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.194410 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst" (OuterVolumeSpecName: "kube-api-access-zzvst") pod "ed3e81c7-6078-42e2-a230-1dcb4b0ce766" (UID: "ed3e81c7-6078-42e2-a230-1dcb4b0ce766"). InnerVolumeSpecName "kube-api-access-zzvst". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.291051 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzvst\" (UniqueName: \"kubernetes.io/projected/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-kube-api-access-zzvst\") on node \"crc\" DevicePath \"\""
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.291085 4801 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.825398 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x" event={"ID":"ed3e81c7-6078-42e2-a230-1dcb4b0ce766","Type":"ContainerDied","Data":"8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813"}
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.825864 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1a42fc97b8df2cad8681e808244177ad86619325dce7ec747cd6761c5f7813"
Dec 06 03:21:51 crc kubenswrapper[4801]: I1206 03:21:51.825487 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x"
Dec 06 03:21:52 crc kubenswrapper[4801]: I1206 03:21:52.420105 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util" (OuterVolumeSpecName: "util") pod "ed3e81c7-6078-42e2-a230-1dcb4b0ce766" (UID: "ed3e81c7-6078-42e2-a230-1dcb4b0ce766"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:21:52 crc kubenswrapper[4801]: I1206 03:21:52.506740 4801 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed3e81c7-6078-42e2-a230-1dcb4b0ce766-util\") on node \"crc\" DevicePath \"\""
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.611541 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"]
Dec 06 03:21:54 crc kubenswrapper[4801]: E1206 03:21:54.612129 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="pull"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.612146 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="pull"
Dec 06 03:21:54 crc kubenswrapper[4801]: E1206 03:21:54.612158 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="util"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.612166 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="util"
Dec 06 03:21:54 crc kubenswrapper[4801]: E1206 03:21:54.612192 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="extract"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.612200 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="extract"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.612332 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed3e81c7-6078-42e2-a230-1dcb4b0ce766" containerName="extract"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.612873 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.617042 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-sfmmn"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.644556 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"]
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.744990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4pp\" (UniqueName: \"kubernetes.io/projected/23d978c7-b8fb-4796-b41e-4805344aa517-kube-api-access-lp4pp\") pod \"openstack-operator-controller-operator-6f9c47f684-gjctz\" (UID: \"23d978c7-b8fb-4796-b41e-4805344aa517\") " pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.846382 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4pp\" (UniqueName: \"kubernetes.io/projected/23d978c7-b8fb-4796-b41e-4805344aa517-kube-api-access-lp4pp\") pod \"openstack-operator-controller-operator-6f9c47f684-gjctz\" (UID: \"23d978c7-b8fb-4796-b41e-4805344aa517\") " pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.873146 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4pp\" (UniqueName: \"kubernetes.io/projected/23d978c7-b8fb-4796-b41e-4805344aa517-kube-api-access-lp4pp\") pod \"openstack-operator-controller-operator-6f9c47f684-gjctz\" (UID: \"23d978c7-b8fb-4796-b41e-4805344aa517\") " pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:21:54 crc kubenswrapper[4801]: I1206 03:21:54.931806 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:21:55 crc kubenswrapper[4801]: I1206 03:21:55.187676 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"]
Dec 06 03:21:55 crc kubenswrapper[4801]: W1206 03:21:55.196234 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23d978c7_b8fb_4796_b41e_4805344aa517.slice/crio-150043d247751c59977f3a90262262102944d4022bd93702398e81ccee857a36 WatchSource:0}: Error finding container 150043d247751c59977f3a90262262102944d4022bd93702398e81ccee857a36: Status 404 returned error can't find the container with id 150043d247751c59977f3a90262262102944d4022bd93702398e81ccee857a36
Dec 06 03:21:55 crc kubenswrapper[4801]: I1206 03:21:55.857414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz" event={"ID":"23d978c7-b8fb-4796-b41e-4805344aa517","Type":"ContainerStarted","Data":"150043d247751c59977f3a90262262102944d4022bd93702398e81ccee857a36"}
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.714248 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.716136 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.732332 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.847465 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.847556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.847618 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcdn2\" (UniqueName: \"kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.949793 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.949878 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.949934 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdn2\" (UniqueName: \"kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.950387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.950519 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:21:59 crc kubenswrapper[4801]: I1206 03:21:59.976192 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdn2\" (UniqueName: \"kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2\") pod \"certified-operators-qh2xr\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") " pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:00 crc kubenswrapper[4801]: I1206 03:22:00.051798 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:01 crc kubenswrapper[4801]: I1206 03:22:01.637682 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:22:01 crc kubenswrapper[4801]: W1206 03:22:01.648743 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1154a7a2_7a4d_4245_a154_5b2a46fa7383.slice/crio-2dcdb7d66810cef54eec52a99a0c60086c286faadce7bccd365e318658a0836b WatchSource:0}: Error finding container 2dcdb7d66810cef54eec52a99a0c60086c286faadce7bccd365e318658a0836b: Status 404 returned error can't find the container with id 2dcdb7d66810cef54eec52a99a0c60086c286faadce7bccd365e318658a0836b
Dec 06 03:22:01 crc kubenswrapper[4801]: I1206 03:22:01.903673 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerStarted","Data":"2dcdb7d66810cef54eec52a99a0c60086c286faadce7bccd365e318658a0836b"}
Dec 06 03:22:02 crc kubenswrapper[4801]: I1206 03:22:02.911788 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz" event={"ID":"23d978c7-b8fb-4796-b41e-4805344aa517","Type":"ContainerStarted","Data":"cdc2548ececb3b8398cf99c69ce31bbb9a9dd9405c7a1042af87a4f8cebf88af"}
Dec 06 03:22:02 crc kubenswrapper[4801]: I1206 03:22:02.912931 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:22:02 crc kubenswrapper[4801]: I1206 03:22:02.913542 4801 generic.go:334] "Generic (PLEG): container finished" podID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerID="ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6" exitCode=0
Dec 06 03:22:02 crc kubenswrapper[4801]: I1206 03:22:02.913599 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerDied","Data":"ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6"}
Dec 06 03:22:02 crc kubenswrapper[4801]: I1206 03:22:02.941802 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz" podStartSLOduration=2.872307572 podStartE2EDuration="8.941785179s" podCreationTimestamp="2025-12-06 03:21:54 +0000 UTC" firstStartedPulling="2025-12-06 03:21:55.198999974 +0000 UTC m=+968.321607546" lastFinishedPulling="2025-12-06 03:22:01.268477581 +0000 UTC m=+974.391085153" observedRunningTime="2025-12-06 03:22:02.939999801 +0000 UTC m=+976.062607383" watchObservedRunningTime="2025-12-06 03:22:02.941785179 +0000 UTC m=+976.064392741"
Dec 06 03:22:06 crc kubenswrapper[4801]: I1206 03:22:06.940868 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerStarted","Data":"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73"}
Dec 06 03:22:07 crc kubenswrapper[4801]: I1206 03:22:07.948272 4801 generic.go:334] "Generic (PLEG): container finished" podID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerID="11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73" exitCode=0
Dec 06 03:22:07 crc kubenswrapper[4801]: I1206 03:22:07.948322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerDied","Data":"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73"}
Dec 06 03:22:08 crc kubenswrapper[4801]: I1206 03:22:08.958963 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerStarted","Data":"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"}
Dec 06 03:22:08 crc kubenswrapper[4801]: I1206 03:22:08.981887 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qh2xr" podStartSLOduration=4.126998599 podStartE2EDuration="9.98187133s" podCreationTimestamp="2025-12-06 03:21:59 +0000 UTC" firstStartedPulling="2025-12-06 03:22:02.915877244 +0000 UTC m=+976.038484816" lastFinishedPulling="2025-12-06 03:22:08.770749965 +0000 UTC m=+981.893357547" observedRunningTime="2025-12-06 03:22:08.978383547 +0000 UTC m=+982.100991119" watchObservedRunningTime="2025-12-06 03:22:08.98187133 +0000 UTC m=+982.104478902"
Dec 06 03:22:10 crc kubenswrapper[4801]: I1206 03:22:10.052863 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:10 crc kubenswrapper[4801]: I1206 03:22:10.053162 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.102717 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qh2xr" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="registry-server" probeResult="failure" output=<
Dec 06 03:22:11 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s
Dec 06 03:22:11 crc kubenswrapper[4801]: >
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.170157 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.170233 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.170291 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt"
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.171132 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.171206 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36" gracePeriod=600
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.981139 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36" exitCode=0
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.981175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36"}
Dec 06 03:22:11 crc kubenswrapper[4801]: I1206 03:22:11.981220 4801 scope.go:117] "RemoveContainer" containerID="be334762e587af043257db835d7f2dd94a8df53291dd8b1402e414ca26dd1b3c"
Dec 06 03:22:13 crc kubenswrapper[4801]: I1206 03:22:13.996772 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0"}
Dec 06 03:22:14 crc kubenswrapper[4801]: I1206 03:22:14.936411 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6f9c47f684-gjctz"
Dec 06 03:22:20 crc kubenswrapper[4801]: I1206 03:22:20.122616 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:20 crc kubenswrapper[4801]: I1206 03:22:20.164568 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:22 crc kubenswrapper[4801]: I1206 03:22:22.503485 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:22:22 crc kubenswrapper[4801]: I1206 03:22:22.504268 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qh2xr" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="registry-server" containerID="cri-o://5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a" gracePeriod=2
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.789430 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.920185 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content\") pod \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") "
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.920242 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities\") pod \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") "
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.920323 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdn2\" (UniqueName: \"kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2\") pod \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\" (UID: \"1154a7a2-7a4d-4245-a154-5b2a46fa7383\") "
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.921670 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities" (OuterVolumeSpecName: "utilities") pod "1154a7a2-7a4d-4245-a154-5b2a46fa7383" (UID: "1154a7a2-7a4d-4245-a154-5b2a46fa7383"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.933132 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2" (OuterVolumeSpecName: "kube-api-access-fcdn2") pod "1154a7a2-7a4d-4245-a154-5b2a46fa7383" (UID: "1154a7a2-7a4d-4245-a154-5b2a46fa7383"). InnerVolumeSpecName "kube-api-access-fcdn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:22:24 crc kubenswrapper[4801]: I1206 03:22:24.976038 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1154a7a2-7a4d-4245-a154-5b2a46fa7383" (UID: "1154a7a2-7a4d-4245-a154-5b2a46fa7383"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.022660 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.022727 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1154a7a2-7a4d-4245-a154-5b2a46fa7383-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.022740 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcdn2\" (UniqueName: \"kubernetes.io/projected/1154a7a2-7a4d-4245-a154-5b2a46fa7383-kube-api-access-fcdn2\") on node \"crc\" DevicePath \"\""
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.085871 4801 generic.go:334] "Generic (PLEG): container finished" podID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerID="5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a" exitCode=0
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.085979 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerDied","Data":"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"}
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.086023 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qh2xr"
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.086056 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qh2xr" event={"ID":"1154a7a2-7a4d-4245-a154-5b2a46fa7383","Type":"ContainerDied","Data":"2dcdb7d66810cef54eec52a99a0c60086c286faadce7bccd365e318658a0836b"}
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.086083 4801 scope.go:117] "RemoveContainer" containerID="5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.133808 4801 scope.go:117] "RemoveContainer" containerID="11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73"
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.138360 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.142029 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qh2xr"]
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.161941 4801 scope.go:117] "RemoveContainer" containerID="ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6"
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.200079 4801 scope.go:117] "RemoveContainer" containerID="5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"
Dec 06 03:22:25 crc kubenswrapper[4801]: E1206 03:22:25.200896 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a\": container with ID starting with 5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a not found: ID does not exist" containerID="5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"
Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.200934
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a"} err="failed to get container status \"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a\": rpc error: code = NotFound desc = could not find container \"5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a\": container with ID starting with 5fb78039b33270e884698cb7d542f9fc88890d653fe2ca2a4c8150c6f4216a8a not found: ID does not exist" Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.200962 4801 scope.go:117] "RemoveContainer" containerID="11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73" Dec 06 03:22:25 crc kubenswrapper[4801]: E1206 03:22:25.201404 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73\": container with ID starting with 11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73 not found: ID does not exist" containerID="11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73" Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.201432 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73"} err="failed to get container status \"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73\": rpc error: code = NotFound desc = could not find container \"11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73\": container with ID starting with 11908bc516b6c220afdad0317f14fbd82eed496c5f85f4fee03ece602cf9fd73 not found: ID does not exist" Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.201444 4801 scope.go:117] "RemoveContainer" containerID="ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6" Dec 06 03:22:25 crc kubenswrapper[4801]: E1206 
03:22:25.202141 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6\": container with ID starting with ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6 not found: ID does not exist" containerID="ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6" Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.202234 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6"} err="failed to get container status \"ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6\": rpc error: code = NotFound desc = could not find container \"ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6\": container with ID starting with ea7ace5a17b42cd8e41467689f4eaf8f89f5bb1fb0bd9d7867d0c757bd96c0b6 not found: ID does not exist" Dec 06 03:22:25 crc kubenswrapper[4801]: I1206 03:22:25.222232 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" path="/var/lib/kubelet/pods/1154a7a2-7a4d-4245-a154-5b2a46fa7383/volumes" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.035447 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst"] Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.036334 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="extract-content" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.036352 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="extract-content" Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.036366 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="extract-utilities" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.036445 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="extract-utilities" Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.036466 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="registry-server" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.036477 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="registry-server" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.036660 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1154a7a2-7a4d-4245-a154-5b2a46fa7383" containerName="registry-server" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.037419 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.042992 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wmgnf" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.054278 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.056811 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.063625 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8xkr6" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.068459 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.086168 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.092385 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqv4g\" (UniqueName: \"kubernetes.io/projected/bf89afba-23bf-4d4e-8de6-58be01700897-kube-api-access-rqv4g\") pod \"barbican-operator-controller-manager-7d9dfd778-6tvst\" (UID: \"bf89afba-23bf-4d4e-8de6-58be01700897\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.092456 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srk57\" (UniqueName: \"kubernetes.io/projected/075bc058-a6db-435f-b4da-78d269436fc5-kube-api-access-srk57\") pod \"cinder-operator-controller-manager-68dd88d65f-bgnqt\" (UID: \"075bc058-a6db-435f-b4da-78d269436fc5\") " pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.109776 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.110792 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.112694 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-55qpr" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.141055 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.142111 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.162739 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9b5sw" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.170502 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.192741 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.194036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvvwt\" (UniqueName: \"kubernetes.io/projected/6a1623fb-e41b-4fb4-ad84-a9d95a642210-kube-api-access-wvvwt\") pod \"designate-operator-controller-manager-78b4bc895b-2ft2k\" (UID: \"6a1623fb-e41b-4fb4-ad84-a9d95a642210\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.194127 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqv4g\" (UniqueName: 
\"kubernetes.io/projected/bf89afba-23bf-4d4e-8de6-58be01700897-kube-api-access-rqv4g\") pod \"barbican-operator-controller-manager-7d9dfd778-6tvst\" (UID: \"bf89afba-23bf-4d4e-8de6-58be01700897\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.194185 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8mxz\" (UniqueName: \"kubernetes.io/projected/83987163-dcd4-42d5-98fb-155bc07daf26-kube-api-access-k8mxz\") pod \"heat-operator-controller-manager-5f64f6f8bb-b659z\" (UID: \"83987163-dcd4-42d5-98fb-155bc07daf26\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.194239 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srk57\" (UniqueName: \"kubernetes.io/projected/075bc058-a6db-435f-b4da-78d269436fc5-kube-api-access-srk57\") pod \"cinder-operator-controller-manager-68dd88d65f-bgnqt\" (UID: \"075bc058-a6db-435f-b4da-78d269436fc5\") " pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.202158 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.206879 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.209595 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.218232 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-pgwgr" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.219354 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.226734 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-64wk2" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.239630 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srk57\" (UniqueName: \"kubernetes.io/projected/075bc058-a6db-435f-b4da-78d269436fc5-kube-api-access-srk57\") pod \"cinder-operator-controller-manager-68dd88d65f-bgnqt\" (UID: \"075bc058-a6db-435f-b4da-78d269436fc5\") " pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.255187 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqv4g\" (UniqueName: \"kubernetes.io/projected/bf89afba-23bf-4d4e-8de6-58be01700897-kube-api-access-rqv4g\") pod \"barbican-operator-controller-manager-7d9dfd778-6tvst\" (UID: \"bf89afba-23bf-4d4e-8de6-58be01700897\") " 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.258058 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.262720 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.294537 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.295935 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.298750 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8mxz\" (UniqueName: \"kubernetes.io/projected/83987163-dcd4-42d5-98fb-155bc07daf26-kube-api-access-k8mxz\") pod \"heat-operator-controller-manager-5f64f6f8bb-b659z\" (UID: \"83987163-dcd4-42d5-98fb-155bc07daf26\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.298823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88df9\" (UniqueName: \"kubernetes.io/projected/936dc55c-43bb-4e3d-8970-0811d582232a-kube-api-access-88df9\") pod \"horizon-operator-controller-manager-68c6d99b8f-zsdh6\" (UID: \"936dc55c-43bb-4e3d-8970-0811d582232a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.298900 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvvwt\" 
(UniqueName: \"kubernetes.io/projected/6a1623fb-e41b-4fb4-ad84-a9d95a642210-kube-api-access-wvvwt\") pod \"designate-operator-controller-manager-78b4bc895b-2ft2k\" (UID: \"6a1623fb-e41b-4fb4-ad84-a9d95a642210\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.298934 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmvj6\" (UniqueName: \"kubernetes.io/projected/c9d1b6fe-6fbe-42b4-b6d3-88864f542000-kube-api-access-hmvj6\") pod \"glance-operator-controller-manager-77987cd8cd-kfdvh\" (UID: \"c9d1b6fe-6fbe-42b4-b6d3-88864f542000\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.305325 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.305443 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ng5q8" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.310293 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.335505 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.336913 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.343156 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8dt59" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.343775 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.344876 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.347801 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-m4hlt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.352587 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8mxz\" (UniqueName: \"kubernetes.io/projected/83987163-dcd4-42d5-98fb-155bc07daf26-kube-api-access-k8mxz\") pod \"heat-operator-controller-manager-5f64f6f8bb-b659z\" (UID: \"83987163-dcd4-42d5-98fb-155bc07daf26\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.354639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvvwt\" (UniqueName: \"kubernetes.io/projected/6a1623fb-e41b-4fb4-ad84-a9d95a642210-kube-api-access-wvvwt\") pod \"designate-operator-controller-manager-78b4bc895b-2ft2k\" (UID: \"6a1623fb-e41b-4fb4-ad84-a9d95a642210\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.368194 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.384514 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.392433 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.393642 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.395697 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-w7c2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.396035 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.401548 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88df9\" (UniqueName: \"kubernetes.io/projected/936dc55c-43bb-4e3d-8970-0811d582232a-kube-api-access-88df9\") pod \"horizon-operator-controller-manager-68c6d99b8f-zsdh6\" (UID: \"936dc55c-43bb-4e3d-8970-0811d582232a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.401858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpq5\" (UniqueName: \"kubernetes.io/projected/528abee5-1816-4693-8c8d-ec8addacf287-kube-api-access-kbpq5\") pod \"keystone-operator-controller-manager-7765d96ddf-d2q7s\" (UID: \"528abee5-1816-4693-8c8d-ec8addacf287\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.401947 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2r5z\" (UniqueName: \"kubernetes.io/projected/adf5388c-f2b1-4cce-9616-03c9ecde87e8-kube-api-access-z2r5z\") pod \"manila-operator-controller-manager-7c79b5df47-7qljd\" (UID: \"adf5388c-f2b1-4cce-9616-03c9ecde87e8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.402059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmvj6\" (UniqueName: \"kubernetes.io/projected/c9d1b6fe-6fbe-42b4-b6d3-88864f542000-kube-api-access-hmvj6\") pod \"glance-operator-controller-manager-77987cd8cd-kfdvh\" (UID: \"c9d1b6fe-6fbe-42b4-b6d3-88864f542000\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" Dec 06 03:22:43 
crc kubenswrapper[4801]: I1206 03:22:43.402175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.402281 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7f2\" (UniqueName: \"kubernetes.io/projected/5e8887af-c61f-4cb5-83ae-c0a62adfb3b2-kube-api-access-sb7f2\") pod \"ironic-operator-controller-manager-6c548fd776-9gv2p\" (UID: \"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.402373 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9r6\" (UniqueName: \"kubernetes.io/projected/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-kube-api-access-qg9r6\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.411676 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.421895 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.428184 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"] Dec 06 03:22:43 crc 
kubenswrapper[4801]: I1206 03:22:43.440005 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.446628 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.447580 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.448574 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.455458 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88df9\" (UniqueName: \"kubernetes.io/projected/936dc55c-43bb-4e3d-8970-0811d582232a-kube-api-access-88df9\") pod \"horizon-operator-controller-manager-68c6d99b8f-zsdh6\" (UID: \"936dc55c-43bb-4e3d-8970-0811d582232a\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.454171 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g9xgd" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.456197 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tntlf" Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.470817 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"] Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.472019 4801 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.476327 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-l7rxl"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.477775 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmvj6\" (UniqueName: \"kubernetes.io/projected/c9d1b6fe-6fbe-42b4-b6d3-88864f542000-kube-api-access-hmvj6\") pod \"glance-operator-controller-manager-77987cd8cd-kfdvh\" (UID: \"c9d1b6fe-6fbe-42b4-b6d3-88864f542000\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.491913 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.494218 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.500905 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.503889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.503965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb7f2\" (UniqueName: \"kubernetes.io/projected/5e8887af-c61f-4cb5-83ae-c0a62adfb3b2-kube-api-access-sb7f2\") pod \"ironic-operator-controller-manager-6c548fd776-9gv2p\" (UID: \"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.504007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9r6\" (UniqueName: \"kubernetes.io/projected/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-kube-api-access-qg9r6\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.504059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbpq5\" (UniqueName: \"kubernetes.io/projected/528abee5-1816-4693-8c8d-ec8addacf287-kube-api-access-kbpq5\") pod \"keystone-operator-controller-manager-7765d96ddf-d2q7s\" (UID: \"528abee5-1816-4693-8c8d-ec8addacf287\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.504083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2r5z\" (UniqueName: \"kubernetes.io/projected/adf5388c-f2b1-4cce-9616-03c9ecde87e8-kube-api-access-z2r5z\") pod \"manila-operator-controller-manager-7c79b5df47-7qljd\" (UID: \"adf5388c-f2b1-4cce-9616-03c9ecde87e8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"
Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.504863 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.505202 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 nodeName:}" failed. No retries permitted until 2025-12-06 03:22:44.005178706 +0000 UTC m=+1017.127786278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.515773 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.540492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2r5z\" (UniqueName: \"kubernetes.io/projected/adf5388c-f2b1-4cce-9616-03c9ecde87e8-kube-api-access-z2r5z\") pod \"manila-operator-controller-manager-7c79b5df47-7qljd\" (UID: \"adf5388c-f2b1-4cce-9616-03c9ecde87e8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.541147 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb7f2\" (UniqueName: \"kubernetes.io/projected/5e8887af-c61f-4cb5-83ae-c0a62adfb3b2-kube-api-access-sb7f2\") pod \"ironic-operator-controller-manager-6c548fd776-9gv2p\" (UID: \"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.543610 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.544734 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.549348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bnjng"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.549446 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.552110 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9r6\" (UniqueName: \"kubernetes.io/projected/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-kube-api-access-qg9r6\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.556088 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.557237 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.558487 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbpq5\" (UniqueName: \"kubernetes.io/projected/528abee5-1816-4693-8c8d-ec8addacf287-kube-api-access-kbpq5\") pod \"keystone-operator-controller-manager-7765d96ddf-d2q7s\" (UID: \"528abee5-1816-4693-8c8d-ec8addacf287\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.564952 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.585700 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.591415 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.591489 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r2h27"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpd85\" (UniqueName: \"kubernetes.io/projected/01a93f61-bdef-4ff2-9f14-357a4737f0fa-kube-api-access-rpd85\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-95rj5\" (UID: \"01a93f61-bdef-4ff2-9f14-357a4737f0fa\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608414 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5n6w\" (UniqueName: \"kubernetes.io/projected/7cd29fbc-0b7b-4619-97b5-febfdd86a6e2-kube-api-access-h5n6w\") pod \"nova-operator-controller-manager-697bc559fc-5m2qc\" (UID: \"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608470 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zblvc\" (UniqueName: \"kubernetes.io/projected/c84bc554-95d1-4cb3-889e-e3eb348d5b37-kube-api-access-zblvc\") pod \"octavia-operator-controller-manager-998648c74-fcq4r\" (UID: \"c84bc554-95d1-4cb3-889e-e3eb348d5b37\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608508 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56rs\" (UniqueName: \"kubernetes.io/projected/67793857-efbd-4ac4-8c3d-0f5f508ae3ee-kube-api-access-x56rs\") pod \"mariadb-operator-controller-manager-56bbcc9d85-xm85p\" (UID: \"67793857-efbd-4ac4-8c3d-0f5f508ae3ee\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608532 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.608561 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klctn\" (UniqueName: \"kubernetes.io/projected/16612e3e-2588-413f-b0ff-0a97864485ca-kube-api-access-klctn\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.688899 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.710945 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpd85\" (UniqueName: \"kubernetes.io/projected/01a93f61-bdef-4ff2-9f14-357a4737f0fa-kube-api-access-rpd85\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-95rj5\" (UID: \"01a93f61-bdef-4ff2-9f14-357a4737f0fa\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.711038 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5n6w\" (UniqueName: \"kubernetes.io/projected/7cd29fbc-0b7b-4619-97b5-febfdd86a6e2-kube-api-access-h5n6w\") pod \"nova-operator-controller-manager-697bc559fc-5m2qc\" (UID: \"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.711114 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zblvc\" (UniqueName: \"kubernetes.io/projected/c84bc554-95d1-4cb3-889e-e3eb348d5b37-kube-api-access-zblvc\") pod \"octavia-operator-controller-manager-998648c74-fcq4r\" (UID: \"c84bc554-95d1-4cb3-889e-e3eb348d5b37\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.711225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56rs\" (UniqueName: \"kubernetes.io/projected/67793857-efbd-4ac4-8c3d-0f5f508ae3ee-kube-api-access-x56rs\") pod \"mariadb-operator-controller-manager-56bbcc9d85-xm85p\" (UID: \"67793857-efbd-4ac4-8c3d-0f5f508ae3ee\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.711286 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.711324 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klctn\" (UniqueName: \"kubernetes.io/projected/16612e3e-2588-413f-b0ff-0a97864485ca-kube-api-access-klctn\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.711732 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 06 03:22:43 crc kubenswrapper[4801]: E1206 03:22:43.711931 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:22:44.211910394 +0000 UTC m=+1017.334517966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.726189 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.744972 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56rs\" (UniqueName: \"kubernetes.io/projected/67793857-efbd-4ac4-8c3d-0f5f508ae3ee-kube-api-access-x56rs\") pod \"mariadb-operator-controller-manager-56bbcc9d85-xm85p\" (UID: \"67793857-efbd-4ac4-8c3d-0f5f508ae3ee\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.758722 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klctn\" (UniqueName: \"kubernetes.io/projected/16612e3e-2588-413f-b0ff-0a97864485ca-kube-api-access-klctn\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.761361 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zblvc\" (UniqueName: \"kubernetes.io/projected/c84bc554-95d1-4cb3-889e-e3eb348d5b37-kube-api-access-zblvc\") pod \"octavia-operator-controller-manager-998648c74-fcq4r\" (UID: \"c84bc554-95d1-4cb3-889e-e3eb348d5b37\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.772592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5n6w\" (UniqueName: \"kubernetes.io/projected/7cd29fbc-0b7b-4619-97b5-febfdd86a6e2-kube-api-access-h5n6w\") pod \"nova-operator-controller-manager-697bc559fc-5m2qc\" (UID: \"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.776209 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.778636 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpd85\" (UniqueName: \"kubernetes.io/projected/01a93f61-bdef-4ff2-9f14-357a4737f0fa-kube-api-access-rpd85\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-95rj5\" (UID: \"01a93f61-bdef-4ff2-9f14-357a4737f0fa\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.782181 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.785381 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.791634 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-kjhmw"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.811718 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.842643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcp8\" (UniqueName: \"kubernetes.io/projected/a55050d3-bc38-44be-b873-79b80850217e-kube-api-access-qvcp8\") pod \"ovn-operator-controller-manager-b6456fdb6-jnb5d\" (UID: \"a55050d3-bc38-44be-b873-79b80850217e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.881632 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.885118 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.950168 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-747r8"]
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.951923 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.952646 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcp8\" (UniqueName: \"kubernetes.io/projected/a55050d3-bc38-44be-b873-79b80850217e-kube-api-access-qvcp8\") pod \"ovn-operator-controller-manager-b6456fdb6-jnb5d\" (UID: \"a55050d3-bc38-44be-b873-79b80850217e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.957209 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9sbzk"
Dec 06 03:22:43 crc kubenswrapper[4801]: I1206 03:22:43.975143 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:43.995080 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:43.995986 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcp8\" (UniqueName: \"kubernetes.io/projected/a55050d3-bc38-44be-b873-79b80850217e-kube-api-access-qvcp8\") pod \"ovn-operator-controller-manager-b6456fdb6-jnb5d\" (UID: \"a55050d3-bc38-44be-b873-79b80850217e\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.024597 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.028278 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.041007 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.042531 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-q6lm6"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.053736 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mvns\" (UniqueName: \"kubernetes.io/projected/1ab19549-8876-40f6-82eb-c29be8d76122-kube-api-access-2mvns\") pod \"swift-operator-controller-manager-5f8c65bbfc-knvll\" (UID: \"1ab19549-8876-40f6-82eb-c29be8d76122\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.053839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.053887 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4rzl\" (UniqueName: \"kubernetes.io/projected/92a97017-cb01-43fd-ac39-38f0b0f40e44-kube-api-access-r4rzl\") pod \"placement-operator-controller-manager-78f8948974-747r8\" (UID: \"92a97017-cb01-43fd-ac39-38f0b0f40e44\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8"
Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.054098 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.054143 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 nodeName:}" failed. No retries permitted until 2025-12-06 03:22:45.054128287 +0000 UTC m=+1018.176735859 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.086052 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-747r8"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.095273 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.096951 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.100070 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-t2drx"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.134333 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.156850 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mvns\" (UniqueName: \"kubernetes.io/projected/1ab19549-8876-40f6-82eb-c29be8d76122-kube-api-access-2mvns\") pod \"swift-operator-controller-manager-5f8c65bbfc-knvll\" (UID: \"1ab19549-8876-40f6-82eb-c29be8d76122\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.156952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4rzl\" (UniqueName: \"kubernetes.io/projected/92a97017-cb01-43fd-ac39-38f0b0f40e44-kube-api-access-r4rzl\") pod \"placement-operator-controller-manager-78f8948974-747r8\" (UID: \"92a97017-cb01-43fd-ac39-38f0b0f40e44\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.156981 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk6cg\" (UniqueName: \"kubernetes.io/projected/b755167b-08be-4bcf-bde8-5918264dc691-kube-api-access-vk6cg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-wbqp2\" (UID: \"b755167b-08be-4bcf-bde8-5918264dc691\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.176875 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.177244 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.182832 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mvns\" (UniqueName: \"kubernetes.io/projected/1ab19549-8876-40f6-82eb-c29be8d76122-kube-api-access-2mvns\") pod \"swift-operator-controller-manager-5f8c65bbfc-knvll\" (UID: \"1ab19549-8876-40f6-82eb-c29be8d76122\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.188336 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4rzl\" (UniqueName: \"kubernetes.io/projected/92a97017-cb01-43fd-ac39-38f0b0f40e44-kube-api-access-r4rzl\") pod \"placement-operator-controller-manager-78f8948974-747r8\" (UID: \"92a97017-cb01-43fd-ac39-38f0b0f40e44\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.208956 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.210684 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.215884 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-tq5pc"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.255400 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.260379 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk6cg\" (UniqueName: \"kubernetes.io/projected/b755167b-08be-4bcf-bde8-5918264dc691-kube-api-access-vk6cg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-wbqp2\" (UID: \"b755167b-08be-4bcf-bde8-5918264dc691\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.260443 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.260553 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm8rb\" (UniqueName: \"kubernetes.io/projected/894496e6-3155-4f57-98a1-98a51a1f0f30-kube-api-access-xm8rb\") pod \"test-operator-controller-manager-5854674fcc-vq4xk\" (UID: \"894496e6-3155-4f57-98a1-98a51a1f0f30\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"
Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.261432 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.261506 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:22:45.261484682 +0000 UTC m=+1018.384092254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.283450 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.286270 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.288995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk6cg\" (UniqueName: \"kubernetes.io/projected/b755167b-08be-4bcf-bde8-5918264dc691-kube-api-access-vk6cg\") pod \"telemetry-operator-controller-manager-76cc84c6bb-wbqp2\" (UID: \"b755167b-08be-4bcf-bde8-5918264dc691\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.308073 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.308224 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9s2sx"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.324974 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.330810 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.332131 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.344399 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.344494 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jj6b5"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.347124 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.367820 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.367858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.367894 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkp6\" (UniqueName: \"kubernetes.io/projected/d189f8dd-9d7d-40b5-806b-566da68bf67c-kube-api-access-8wkp6\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.367918 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm8rb\" (UniqueName: \"kubernetes.io/projected/894496e6-3155-4f57-98a1-98a51a1f0f30-kube-api-access-xm8rb\") pod \"test-operator-controller-manager-5854674fcc-vq4xk\" (UID: \"894496e6-3155-4f57-98a1-98a51a1f0f30\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.367990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlrrf\" (UniqueName: \"kubernetes.io/projected/7793e32c-9749-4b8f-a643-666cfa0783a8-kube-api-access-nlrrf\") pod \"watcher-operator-controller-manager-769dc69bc-cg5wj\" (UID: \"7793e32c-9749-4b8f-a643-666cfa0783a8\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.384371 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.400238 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.413520 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm8rb\" (UniqueName: \"kubernetes.io/projected/894496e6-3155-4f57-98a1-98a51a1f0f30-kube-api-access-xm8rb\") pod \"test-operator-controller-manager-5854674fcc-vq4xk\" (UID: \"894496e6-3155-4f57-98a1-98a51a1f0f30\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.417574 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g"]
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.418862 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.422575 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gpbwn"
Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.426564 4801 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.460874 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.469031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlssl\" (UniqueName: \"kubernetes.io/projected/4cf67ce7-dfd4-46b5-98e6-7c6ff303793b-kube-api-access-qlssl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9dn6g\" (UID: \"4cf67ce7-dfd4-46b5-98e6-7c6ff303793b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.469112 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.469145 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.469185 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkp6\" (UniqueName: \"kubernetes.io/projected/d189f8dd-9d7d-40b5-806b-566da68bf67c-kube-api-access-8wkp6\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" 
(UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.469234 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlrrf\" (UniqueName: \"kubernetes.io/projected/7793e32c-9749-4b8f-a643-666cfa0783a8-kube-api-access-nlrrf\") pod \"watcher-operator-controller-manager-769dc69bc-cg5wj\" (UID: \"7793e32c-9749-4b8f-a643-666cfa0783a8\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.469713 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.476898 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:44.976827741 +0000 UTC m=+1018.099435543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "metrics-server-cert" not found Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.477337 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 03:22:44 crc kubenswrapper[4801]: E1206 03:22:44.477379 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. 
No retries permitted until 2025-12-06 03:22:44.977372365 +0000 UTC m=+1018.099979927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "webhook-server-cert" not found Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.498488 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlrrf\" (UniqueName: \"kubernetes.io/projected/7793e32c-9749-4b8f-a643-666cfa0783a8-kube-api-access-nlrrf\") pod \"watcher-operator-controller-manager-769dc69bc-cg5wj\" (UID: \"7793e32c-9749-4b8f-a643-666cfa0783a8\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.520479 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkp6\" (UniqueName: \"kubernetes.io/projected/d189f8dd-9d7d-40b5-806b-566da68bf67c-kube-api-access-8wkp6\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.529727 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.532992 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.560879 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.588220 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlssl\" (UniqueName: \"kubernetes.io/projected/4cf67ce7-dfd4-46b5-98e6-7c6ff303793b-kube-api-access-qlssl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9dn6g\" (UID: \"4cf67ce7-dfd4-46b5-98e6-7c6ff303793b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.607988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlssl\" (UniqueName: \"kubernetes.io/projected/4cf67ce7-dfd4-46b5-98e6-7c6ff303793b-kube-api-access-qlssl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-9dn6g\" (UID: \"4cf67ce7-dfd4-46b5-98e6-7c6ff303793b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.625203 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.634518 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.680978 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.723476 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.753847 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.771848 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.783962 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p"] Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.790704 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd"] Dec 06 03:22:44 crc kubenswrapper[4801]: W1206 03:22:44.791004 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01a93f61_bdef_4ff2_9f14_357a4737f0fa.slice/crio-551b63da3274933de2f6df1ac3ed7c9a5b0da9891bdc6681105bb05a3d3a7976 WatchSource:0}: Error finding container 551b63da3274933de2f6df1ac3ed7c9a5b0da9891bdc6681105bb05a3d3a7976: Status 404 returned error can't find the container with id 551b63da3274933de2f6df1ac3ed7c9a5b0da9891bdc6681105bb05a3d3a7976 Dec 06 03:22:44 crc kubenswrapper[4801]: I1206 03:22:44.954201 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6"] Dec 06 03:22:44 crc kubenswrapper[4801]: W1206 03:22:44.968915 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod936dc55c_43bb_4e3d_8970_0811d582232a.slice/crio-2ee89b674322038f5f876e463b659011465bbf88de40d135e48eb251241e23f8 WatchSource:0}: Error finding container 2ee89b674322038f5f876e463b659011465bbf88de40d135e48eb251241e23f8: Status 404 returned error can't find the container with id 2ee89b674322038f5f876e463b659011465bbf88de40d135e48eb251241e23f8 Dec 06 03:22:45 crc 
kubenswrapper[4801]: I1206 03:22:45.010496 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.010537 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.010704 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.010766 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.010797 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:46.01077701 +0000 UTC m=+1019.133384582 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "metrics-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.012150 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:46.012130266 +0000 UTC m=+1019.134737838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.031652 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.036420 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.040783 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.111536 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:45 crc 
kubenswrapper[4801]: E1206 03:22:45.111762 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.111815 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 nodeName:}" failed. No retries permitted until 2025-12-06 03:22:47.111796521 +0000 UTC m=+1020.234404093 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.134731 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.147085 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.155494 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-747r8"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.174667 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s"] Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.177325 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4rzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-747r8_openstack-operators(92a97017-cb01-43fd-ac39-38f0b0f40e44): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.179966 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r4rzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-747r8_openstack-operators(92a97017-cb01-43fd-ac39-38f0b0f40e44): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.181920 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" podUID="92a97017-cb01-43fd-ac39-38f0b0f40e44" Dec 06 03:22:45 crc kubenswrapper[4801]: W1206 03:22:45.221094 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528abee5_1816_4693_8c8d_ec8addacf287.slice/crio-f4441e0bb9fe7c3cb7338e511c248a8c3e2aa064fa68ea5077d46b39c70bd539 WatchSource:0}: Error finding container f4441e0bb9fe7c3cb7338e511c248a8c3e2aa064fa68ea5077d46b39c70bd539: Status 404 returned error can't find the container with id 
f4441e0bb9fe7c3cb7338e511c248a8c3e2aa064fa68ea5077d46b39c70bd539 Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.224079 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbpq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-d2q7s_openstack-operators(528abee5-1816-4693-8c8d-ec8addacf287): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.229832 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbpq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-d2q7s_openstack-operators(528abee5-1816-4693-8c8d-ec8addacf287): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.231344 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podUID="528abee5-1816-4693-8c8d-ec8addacf287" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.267040 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" event={"ID":"b755167b-08be-4bcf-bde8-5918264dc691","Type":"ContainerStarted","Data":"ce1543a1a796a4f4f5a57e8a8bbd608c3e459832393ec478f8e9166c59851bba"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.270194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" event={"ID":"075bc058-a6db-435f-b4da-78d269436fc5","Type":"ContainerStarted","Data":"493293a9e236fa940a396bfac8b03ea1e9522e2f0e1733beed3beb6b289cc6b3"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.271577 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" event={"ID":"67793857-efbd-4ac4-8c3d-0f5f508ae3ee","Type":"ContainerStarted","Data":"a5e9af67fb2d8bfef1dfff724a6b0d506c8bfc0a5ae1e47903d098679298471e"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.272517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" event={"ID":"c9d1b6fe-6fbe-42b4-b6d3-88864f542000","Type":"ContainerStarted","Data":"e17fc016a66cc87e17ea8c8a0878074063e50221f5ae6b92e594a2506f905121"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.275372 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" event={"ID":"01a93f61-bdef-4ff2-9f14-357a4737f0fa","Type":"ContainerStarted","Data":"551b63da3274933de2f6df1ac3ed7c9a5b0da9891bdc6681105bb05a3d3a7976"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.282420 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" event={"ID":"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2","Type":"ContainerStarted","Data":"72d2b2b077b232fce64224efc42cfbebaa948a297dae74213f33f3a09fa090e9"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.286132 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" event={"ID":"bf89afba-23bf-4d4e-8de6-58be01700897","Type":"ContainerStarted","Data":"e68359dbccc34cc69f752e1edb635fa0290dab74450eb0be613e919879587acc"} Dec 06 03:22:45 crc 
kubenswrapper[4801]: I1206 03:22:45.292847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" event={"ID":"adf5388c-f2b1-4cce-9616-03c9ecde87e8","Type":"ContainerStarted","Data":"333f11aebe0a61a4e213c26f5a5c16bb72c5c2d24e2f3cf9db6dabdc52af602d"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.303666 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.314192 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" event={"ID":"6a1623fb-e41b-4fb4-ad84-a9d95a642210","Type":"ContainerStarted","Data":"602be756720d587f77fb68a47d6817ebe9c9862e74ecc05c36dde0ad86c9fd5d"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.319987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" event={"ID":"528abee5-1816-4693-8c8d-ec8addacf287","Type":"ContainerStarted","Data":"f4441e0bb9fe7c3cb7338e511c248a8c3e2aa064fa68ea5077d46b39c70bd539"} Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.320130 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 
500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mvns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-knvll_openstack-operators(1ab19549-8876-40f6-82eb-c29be8d76122): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.321336 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.321478 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.321515 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:22:47.321500548 +0000 UTC m=+1020.444108120 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.323802 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mvns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-knvll_openstack-operators(1ab19549-8876-40f6-82eb-c29be8d76122): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.324304 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" event={"ID":"83987163-dcd4-42d5-98fb-155bc07daf26","Type":"ContainerStarted","Data":"0614e47e98de3f9badb63a0aad670494dcd6e0a67cc9bacd1413f323b68cec45"} Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.325049 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podUID="1ab19549-8876-40f6-82eb-c29be8d76122" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.325887 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podUID="528abee5-1816-4693-8c8d-ec8addacf287" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.326975 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" event={"ID":"c84bc554-95d1-4cb3-889e-e3eb348d5b37","Type":"ContainerStarted","Data":"9d2b1f25e7a468cf15680809de68701e5ffe2008244bb167986fe534efe94bd9"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.330045 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" event={"ID":"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2","Type":"ContainerStarted","Data":"ca6eda6d62d82903952d61171976140c1934ab037e4defd253d5d09f9c5530a6"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.332534 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" event={"ID":"92a97017-cb01-43fd-ac39-38f0b0f40e44","Type":"ContainerStarted","Data":"ae32fb54c66521633b8a7716241910ac21bbd1d6e1550e7ecedc59bab03be8e8"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.334127 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" event={"ID":"936dc55c-43bb-4e3d-8970-0811d582232a","Type":"ContainerStarted","Data":"2ee89b674322038f5f876e463b659011465bbf88de40d135e48eb251241e23f8"} Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.335130 4801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" podUID="92a97017-cb01-43fd-ac39-38f0b0f40e44" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.337270 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" event={"ID":"a55050d3-bc38-44be-b873-79b80850217e","Type":"ContainerStarted","Data":"6cefdf5f7bfcff63bd14be006b6f36685b49a1348cfb06574d731faa8a60c10b"} Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.410144 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj"] Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.431365 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk"] Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.436131 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm8rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vq4xk_openstack-operators(894496e6-3155-4f57-98a1-98a51a1f0f30): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.439247 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm8rb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vq4xk_openstack-operators(894496e6-3155-4f57-98a1-98a51a1f0f30): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.440831 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" podUID="894496e6-3155-4f57-98a1-98a51a1f0f30" Dec 06 03:22:45 crc kubenswrapper[4801]: I1206 03:22:45.479595 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g"] Dec 06 03:22:45 crc kubenswrapper[4801]: W1206 03:22:45.482647 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cf67ce7_dfd4_46b5_98e6_7c6ff303793b.slice/crio-bacb52fd888f9454af2ca1b2692ecc5ced2604bb73f8966782504ab6e5e5b2b3 
WatchSource:0}: Error finding container bacb52fd888f9454af2ca1b2692ecc5ced2604bb73f8966782504ab6e5e5b2b3: Status 404 returned error can't find the container with id bacb52fd888f9454af2ca1b2692ecc5ced2604bb73f8966782504ab6e5e5b2b3 Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.488287 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlssl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9dn6g_openstack-operators(4cf67ce7-dfd4-46b5-98e6-7c6ff303793b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 03:22:45 crc kubenswrapper[4801]: E1206 03:22:45.489637 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podUID="4cf67ce7-dfd4-46b5-98e6-7c6ff303793b" Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.031298 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.031619 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.031511 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.031722 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:48.031700748 +0000 UTC m=+1021.154308410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "metrics-server-cert" not found Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.031779 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.031838 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:48.031823091 +0000 UTC m=+1021.154430663 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "webhook-server-cert" not found Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.345875 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" event={"ID":"4cf67ce7-dfd4-46b5-98e6-7c6ff303793b","Type":"ContainerStarted","Data":"bacb52fd888f9454af2ca1b2692ecc5ced2604bb73f8966782504ab6e5e5b2b3"} Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.347128 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" event={"ID":"7793e32c-9749-4b8f-a643-666cfa0783a8","Type":"ContainerStarted","Data":"445ab83646812bf5b9cbdaf1027fd04e083a3803f2727ecda58d6f7a30481316"} Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.347817 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podUID="4cf67ce7-dfd4-46b5-98e6-7c6ff303793b" Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.348049 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" event={"ID":"894496e6-3155-4f57-98a1-98a51a1f0f30","Type":"ContainerStarted","Data":"4fe9cef3579d8d0e590d0882c0de3243bdf635cb241221fa176ec96a8498f11b"} Dec 06 03:22:46 crc kubenswrapper[4801]: I1206 03:22:46.349864 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" event={"ID":"1ab19549-8876-40f6-82eb-c29be8d76122","Type":"ContainerStarted","Data":"90c4be4d8e0a585c4f517d5fa91f57d11192689d616e10a783a60ac5bcaa6f96"} Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.350809 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" podUID="894496e6-3155-4f57-98a1-98a51a1f0f30" Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.352698 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podUID="1ab19549-8876-40f6-82eb-c29be8d76122" Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.353326 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" podUID="92a97017-cb01-43fd-ac39-38f0b0f40e44" Dec 06 03:22:46 crc kubenswrapper[4801]: E1206 03:22:46.353390 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podUID="528abee5-1816-4693-8c8d-ec8addacf287" Dec 06 03:22:47 crc kubenswrapper[4801]: I1206 03:22:47.147495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.147642 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.147723 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 nodeName:}" failed. No retries permitted until 2025-12-06 03:22:51.147702387 +0000 UTC m=+1024.270309969 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:47 crc kubenswrapper[4801]: I1206 03:22:47.350106 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.350264 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.351182 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:22:51.351161426 +0000 UTC m=+1024.473768998 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.364407 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podUID="4cf67ce7-dfd4-46b5-98e6-7c6ff303793b" Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.365482 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" podUID="894496e6-3155-4f57-98a1-98a51a1f0f30" Dec 06 03:22:47 crc kubenswrapper[4801]: E1206 03:22:47.365527 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podUID="1ab19549-8876-40f6-82eb-c29be8d76122" Dec 06 03:22:48 crc kubenswrapper[4801]: I1206 03:22:48.064619 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:48 crc kubenswrapper[4801]: I1206 03:22:48.064692 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:48 crc kubenswrapper[4801]: E1206 03:22:48.064935 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 03:22:48 crc kubenswrapper[4801]: E1206 03:22:48.064979 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 03:22:48 crc kubenswrapper[4801]: E1206 03:22:48.065054 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:52.065028804 +0000 UTC m=+1025.187636556 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "webhook-server-cert" not found Dec 06 03:22:48 crc kubenswrapper[4801]: E1206 03:22:48.065114 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:22:52.065068485 +0000 UTC m=+1025.187676257 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "metrics-server-cert" not found Dec 06 03:22:51 crc kubenswrapper[4801]: I1206 03:22:51.215883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:51 crc kubenswrapper[4801]: E1206 03:22:51.216357 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:51 crc kubenswrapper[4801]: E1206 03:22:51.216401 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 nodeName:}" failed. No retries permitted until 2025-12-06 03:22:59.216387284 +0000 UTC m=+1032.338994856 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:51 crc kubenswrapper[4801]: I1206 03:22:51.418511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:22:51 crc kubenswrapper[4801]: E1206 03:22:51.418693 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:51 crc kubenswrapper[4801]: E1206 03:22:51.418978 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:22:59.418961089 +0000 UTC m=+1032.541568661 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:52 crc kubenswrapper[4801]: I1206 03:22:52.129333 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:52 crc kubenswrapper[4801]: I1206 03:22:52.129383 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:22:52 crc kubenswrapper[4801]: E1206 03:22:52.129507 4801 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 03:22:52 crc kubenswrapper[4801]: E1206 03:22:52.129510 4801 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 03:22:52 crc kubenswrapper[4801]: E1206 03:22:52.129567 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:23:00.129551789 +0000 UTC m=+1033.252159371 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "webhook-server-cert" not found Dec 06 03:22:52 crc kubenswrapper[4801]: E1206 03:22:52.129581 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs podName:d189f8dd-9d7d-40b5-806b-566da68bf67c nodeName:}" failed. No retries permitted until 2025-12-06 03:23:00.12957522 +0000 UTC m=+1033.252182792 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs") pod "openstack-operator-controller-manager-69f8949d4-nwx88" (UID: "d189f8dd-9d7d-40b5-806b-566da68bf67c") : secret "metrics-server-cert" not found Dec 06 03:22:57 crc kubenswrapper[4801]: I1206 03:22:57.223426 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:22:59 crc kubenswrapper[4801]: I1206 03:22:59.266216 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.267407 4801 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.267498 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert podName:cd4c204b-eb70-4ed7-8800-9c0aa8df0894 
nodeName:}" failed. No retries permitted until 2025-12-06 03:23:15.267470452 +0000 UTC m=+1048.390078074 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert") pod "infra-operator-controller-manager-78d48bff9d-tb9mp" (UID: "cd4c204b-eb70-4ed7-8800-9c0aa8df0894") : secret "infra-operator-webhook-server-cert" not found Dec 06 03:22:59 crc kubenswrapper[4801]: I1206 03:22:59.469452 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.469610 4801 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.469672 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert podName:16612e3e-2588-413f-b0ff-0a97864485ca nodeName:}" failed. No retries permitted until 2025-12-06 03:23:15.469656967 +0000 UTC m=+1048.592264539 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" (UID: "16612e3e-2588-413f-b0ff-0a97864485ca") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.857709 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 06 03:22:59 crc kubenswrapper[4801]: E1206 03:22:59.858335 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k8mxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-b659z_openstack-operators(83987163-dcd4-42d5-98fb-155bc07daf26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.183204 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.183309 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.192740 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-webhook-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.198280 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d189f8dd-9d7d-40b5-806b-566da68bf67c-metrics-certs\") pod \"openstack-operator-controller-manager-69f8949d4-nwx88\" (UID: \"d189f8dd-9d7d-40b5-806b-566da68bf67c\") " pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.342313 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-jj6b5" Dec 06 03:23:00 crc kubenswrapper[4801]: I1206 03:23:00.349032 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:01 crc kubenswrapper[4801]: E1206 03:23:01.302917 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 06 03:23:01 crc kubenswrapper[4801]: E1206 03:23:01.303642 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqv4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-6tvst_openstack-operators(bf89afba-23bf-4d4e-8de6-58be01700897): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:01 crc kubenswrapper[4801]: E1206 03:23:01.417148 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/openstack-k8s-operators/cinder-operator:ce694cfbb65ba6795a70d0cc72ce436cb91646de" Dec 06 03:23:01 crc kubenswrapper[4801]: E1206 03:23:01.417250 4801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/openstack-k8s-operators/cinder-operator:ce694cfbb65ba6795a70d0cc72ce436cb91646de" Dec 06 03:23:01 crc kubenswrapper[4801]: E1206 03:23:01.417538 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/cinder-operator:ce694cfbb65ba6795a70d0cc72ce436cb91646de,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srk57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-68dd88d65f-bgnqt_openstack-operators(075bc058-a6db-435f-b4da-78d269436fc5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:02 crc kubenswrapper[4801]: E1206 03:23:02.079284 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 06 03:23:02 crc kubenswrapper[4801]: E1206 03:23:02.079643 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hmvj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-kfdvh_openstack-operators(c9d1b6fe-6fbe-42b4-b6d3-88864f542000): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:02 crc kubenswrapper[4801]: E1206 03:23:02.801725 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 06 03:23:02 crc kubenswrapper[4801]: E1206 03:23:02.801973 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zblvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-fcq4r_openstack-operators(c84bc554-95d1-4cb3-889e-e3eb348d5b37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:03 crc kubenswrapper[4801]: E1206 03:23:03.669368 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 06 03:23:03 crc kubenswrapper[4801]: E1206 03:23:03.669623 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nlrrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-cg5wj_openstack-operators(7793e32c-9749-4b8f-a643-666cfa0783a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:11 crc kubenswrapper[4801]: E1206 03:23:11.151640 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 06 03:23:11 crc kubenswrapper[4801]: E1206 03:23:11.152630 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sb7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-9gv2p_openstack-operators(5e8887af-c61f-4cb5-83ae-c0a62adfb3b2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:11 crc kubenswrapper[4801]: E1206 03:23:11.827349 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 06 03:23:11 crc kubenswrapper[4801]: E1206 03:23:11.827543 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpd85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-95rj5_openstack-operators(01a93f61-bdef-4ff2-9f14-357a4737f0fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:14 crc kubenswrapper[4801]: E1206 03:23:14.485592 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 06 03:23:14 crc kubenswrapper[4801]: E1206 03:23:14.486186 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvcp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-jnb5d_openstack-operators(a55050d3-bc38-44be-b873-79b80850217e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:14 crc kubenswrapper[4801]: E1206 03:23:14.501746 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 06 03:23:14 crc kubenswrapper[4801]: E1206 03:23:14.502056 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vk6cg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-wbqp2_openstack-operators(b755167b-08be-4bcf-bde8-5918264dc691): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:15 crc kubenswrapper[4801]: E1206 03:23:15.136775 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 06 03:23:15 crc kubenswrapper[4801]: E1206 03:23:15.136986 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-88df9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-zsdh6_openstack-operators(936dc55c-43bb-4e3d-8970-0811d582232a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.317662 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.342052 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cd4c204b-eb70-4ed7-8800-9c0aa8df0894-cert\") pod \"infra-operator-controller-manager-78d48bff9d-tb9mp\" (UID: \"cd4c204b-eb70-4ed7-8800-9c0aa8df0894\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.520519 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.537340 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-ng5q8" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.540557 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/16612e3e-2588-413f-b0ff-0a97864485ca-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd44nznw\" (UID: \"16612e3e-2588-413f-b0ff-0a97864485ca\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.545168 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.567187 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-r2h27" Dec 06 03:23:15 crc kubenswrapper[4801]: I1206 03:23:15.575621 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:23:15 crc kubenswrapper[4801]: E1206 03:23:15.651490 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" Dec 06 03:23:15 crc kubenswrapper[4801]: E1206 03:23:15.651728 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x56rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-56bbcc9d85-xm85p_openstack-operators(67793857-efbd-4ac4-8c3d-0f5f508ae3ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:17 crc kubenswrapper[4801]: E1206 03:23:17.411711 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 06 03:23:17 crc kubenswrapper[4801]: E1206 03:23:17.412281 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2r5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-7qljd_openstack-operators(adf5388c-f2b1-4cce-9616-03c9ecde87e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:18 crc kubenswrapper[4801]: E1206 03:23:18.732858 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 06 03:23:18 crc kubenswrapper[4801]: E1206 03:23:18.733066 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2mvns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-knvll_openstack-operators(1ab19549-8876-40f6-82eb-c29be8d76122): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:23 crc kubenswrapper[4801]: E1206 03:23:23.474919 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 06 03:23:23 crc kubenswrapper[4801]: E1206 03:23:23.477162 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5n6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-5m2qc_openstack-operators(7cd29fbc-0b7b-4619-97b5-febfdd86a6e2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:25 crc kubenswrapper[4801]: E1206 03:23:25.247320 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 06 03:23:25 crc kubenswrapper[4801]: E1206 03:23:25.247524 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qlssl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-9dn6g_openstack-operators(4cf67ce7-dfd4-46b5-98e6-7c6ff303793b): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:25 crc kubenswrapper[4801]: E1206 03:23:25.248685 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podUID="4cf67ce7-dfd4-46b5-98e6-7c6ff303793b" Dec 06 03:23:26 crc kubenswrapper[4801]: E1206 03:23:26.029599 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 06 03:23:26 crc kubenswrapper[4801]: E1206 03:23:26.030325 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kbpq5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-d2q7s_openstack-operators(528abee5-1816-4693-8c8d-ec8addacf287): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:26 crc kubenswrapper[4801]: I1206 03:23:26.400097 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88"] Dec 06 03:23:26 crc kubenswrapper[4801]: I1206 03:23:26.738403 4801 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp"] Dec 06 03:23:26 crc kubenswrapper[4801]: I1206 03:23:26.754642 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw"] Dec 06 03:23:28 crc kubenswrapper[4801]: W1206 03:23:28.967361 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16612e3e_2588_413f_b0ff_0a97864485ca.slice/crio-d433986d1219cc5d4049548dfb28ed2bbfa619aad0450921ffcc46db78a6f4ff WatchSource:0}: Error finding container d433986d1219cc5d4049548dfb28ed2bbfa619aad0450921ffcc46db78a6f4ff: Status 404 returned error can't find the container with id d433986d1219cc5d4049548dfb28ed2bbfa619aad0450921ffcc46db78a6f4ff Dec 06 03:23:28 crc kubenswrapper[4801]: W1206 03:23:28.972111 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd4c204b_eb70_4ed7_8800_9c0aa8df0894.slice/crio-e2e95d59e39ed6d3a457ba68321d9f8a64b0e4ad956a9a4a6c8b3221bc77a525 WatchSource:0}: Error finding container e2e95d59e39ed6d3a457ba68321d9f8a64b0e4ad956a9a4a6c8b3221bc77a525: Status 404 returned error can't find the container with id e2e95d59e39ed6d3a457ba68321d9f8a64b0e4ad956a9a4a6c8b3221bc77a525 Dec 06 03:23:28 crc kubenswrapper[4801]: W1206 03:23:28.977676 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd189f8dd_9d7d_40b5_806b_566da68bf67c.slice/crio-6839be1b9d673a4c121b3293fd8e821c686aa0b7b334c08ddbb02617d2ac1f92 WatchSource:0}: Error finding container 6839be1b9d673a4c121b3293fd8e821c686aa0b7b334c08ddbb02617d2ac1f92: Status 404 returned error can't find the container with id 6839be1b9d673a4c121b3293fd8e821c686aa0b7b334c08ddbb02617d2ac1f92 Dec 06 03:23:29 crc kubenswrapper[4801]: I1206 03:23:29.685597 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" event={"ID":"cd4c204b-eb70-4ed7-8800-9c0aa8df0894","Type":"ContainerStarted","Data":"e2e95d59e39ed6d3a457ba68321d9f8a64b0e4ad956a9a4a6c8b3221bc77a525"} Dec 06 03:23:29 crc kubenswrapper[4801]: I1206 03:23:29.688350 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" event={"ID":"d189f8dd-9d7d-40b5-806b-566da68bf67c","Type":"ContainerStarted","Data":"6839be1b9d673a4c121b3293fd8e821c686aa0b7b334c08ddbb02617d2ac1f92"} Dec 06 03:23:29 crc kubenswrapper[4801]: I1206 03:23:29.690150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" event={"ID":"6a1623fb-e41b-4fb4-ad84-a9d95a642210","Type":"ContainerStarted","Data":"bce8c5b8be3f3f50d40e77f0b581c06ba1bb8073ef61d7b88002862eb70acdfb"} Dec 06 03:23:29 crc kubenswrapper[4801]: I1206 03:23:29.691606 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" event={"ID":"16612e3e-2588-413f-b0ff-0a97864485ca","Type":"ContainerStarted","Data":"d433986d1219cc5d4049548dfb28ed2bbfa619aad0450921ffcc46db78a6f4ff"} Dec 06 03:23:29 crc kubenswrapper[4801]: I1206 03:23:29.695570 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" event={"ID":"92a97017-cb01-43fd-ac39-38f0b0f40e44","Type":"ContainerStarted","Data":"a02cf898b14c644082bdf57a9b259d2aeb662a11885c906bb3a182531a8ce074"} Dec 06 03:23:31 crc kubenswrapper[4801]: I1206 03:23:31.711158 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" 
event={"ID":"894496e6-3155-4f57-98a1-98a51a1f0f30","Type":"ContainerStarted","Data":"9a14e2b71260c9ce968a60ed299cbbdf7c3eb8cecd18b3af8105f268e27a1c71"} Dec 06 03:23:32 crc kubenswrapper[4801]: I1206 03:23:32.724890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" event={"ID":"d189f8dd-9d7d-40b5-806b-566da68bf67c","Type":"ContainerStarted","Data":"906136c830185f787e76a3cb51da425b0c069e42c0691bf209690a5d87071779"} Dec 06 03:23:32 crc kubenswrapper[4801]: I1206 03:23:32.725602 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:32 crc kubenswrapper[4801]: I1206 03:23:32.759082 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" podStartSLOduration=49.75904856 podStartE2EDuration="49.75904856s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:23:32.756597794 +0000 UTC m=+1065.879205366" watchObservedRunningTime="2025-12-06 03:23:32.75904856 +0000 UTC m=+1065.881656132" Dec 06 03:23:32 crc kubenswrapper[4801]: E1206 03:23:32.964050 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" podUID="c9d1b6fe-6fbe-42b4-b6d3-88864f542000" Dec 06 03:23:32 crc kubenswrapper[4801]: E1206 03:23:32.975550 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" podUID="075bc058-a6db-435f-b4da-78d269436fc5" Dec 06 03:23:32 crc kubenswrapper[4801]: E1206 03:23:32.979887 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" podUID="01a93f61-bdef-4ff2-9f14-357a4737f0fa" Dec 06 03:23:32 crc kubenswrapper[4801]: E1206 03:23:32.980221 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" podUID="83987163-dcd4-42d5-98fb-155bc07daf26" Dec 06 03:23:32 crc kubenswrapper[4801]: E1206 03:23:32.989323 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" podUID="adf5388c-f2b1-4cce-9616-03c9ecde87e8" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.043628 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" podUID="7793e32c-9749-4b8f-a643-666cfa0783a8" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.158670 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" 
podUID="7cd29fbc-0b7b-4619-97b5-febfdd86a6e2" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.268613 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" podUID="bf89afba-23bf-4d4e-8de6-58be01700897" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.291082 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podUID="1ab19549-8876-40f6-82eb-c29be8d76122" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.403905 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" podUID="5e8887af-c61f-4cb5-83ae-c0a62adfb3b2" Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.412370 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" podUID="67793857-efbd-4ac4-8c3d-0f5f508ae3ee" Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.741536 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" event={"ID":"67793857-efbd-4ac4-8c3d-0f5f508ae3ee","Type":"ContainerStarted","Data":"6595e7376e408be8f8800024ac18750c76d2a249950e28b793621a5778459465"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.748487 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" event={"ID":"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2","Type":"ContainerStarted","Data":"0e0aea3da881b3f30bbaf15d0beaf9db86ccfc78c314dc9ec6299e52eefd1706"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.755722 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" event={"ID":"c9d1b6fe-6fbe-42b4-b6d3-88864f542000","Type":"ContainerStarted","Data":"33c8084af2524c8303cc38a923fc1ccb036bc2809268ef343c2332dda8817cfe"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.767422 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" event={"ID":"894496e6-3155-4f57-98a1-98a51a1f0f30","Type":"ContainerStarted","Data":"3d22d369c933a2bad181da870afdf2c59f0bf170e1e1401def8e62f550e38b86"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.768361 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.790275 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" event={"ID":"1ab19549-8876-40f6-82eb-c29be8d76122","Type":"ContainerStarted","Data":"40dc000a31cf6b5db1370dce8abd19d6796e8993ccfb6134ac2b0f6b9017abc5"} Dec 06 03:23:33 crc kubenswrapper[4801]: E1206 03:23:33.795088 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podUID="1ab19549-8876-40f6-82eb-c29be8d76122" 
Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.800098 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" event={"ID":"bf89afba-23bf-4d4e-8de6-58be01700897","Type":"ContainerStarted","Data":"201068d1012200ff177313405f9684880419b5640dea49881316e24e0e8d5a59"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.804222 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" podStartSLOduration=3.71745977 podStartE2EDuration="50.804202817s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.435930389 +0000 UTC m=+1018.558537961" lastFinishedPulling="2025-12-06 03:23:32.522673436 +0000 UTC m=+1065.645281008" observedRunningTime="2025-12-06 03:23:33.800434687 +0000 UTC m=+1066.923042269" watchObservedRunningTime="2025-12-06 03:23:33.804202817 +0000 UTC m=+1066.926810389" Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.817913 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" event={"ID":"83987163-dcd4-42d5-98fb-155bc07daf26","Type":"ContainerStarted","Data":"350a934dec9ddbbe5e4212c8780b53b05293ac10e9a1d7a0d0c6dee70575c789"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.836285 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" event={"ID":"7793e32c-9749-4b8f-a643-666cfa0783a8","Type":"ContainerStarted","Data":"bfbedb626639e9bbf2c85e5ff8d0a8650374a8ec82a8d995c7e91a04d2de4ff8"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.864554 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" 
event={"ID":"6a1623fb-e41b-4fb4-ad84-a9d95a642210","Type":"ContainerStarted","Data":"5cae5414f72442c78736e34f81df12771b45d07983fbdb093dba2dc033185fab"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.864859 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.883540 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" event={"ID":"16612e3e-2588-413f-b0ff-0a97864485ca","Type":"ContainerStarted","Data":"1a626468e5b8b7447415d8202196d305f7ab2f6864968c937618623a4fa29e8b"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.894614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" event={"ID":"01a93f61-bdef-4ff2-9f14-357a4737f0fa","Type":"ContainerStarted","Data":"5f676b35c765dbf0027fb364754844ffd0efc00c48b4cb65bf2a1222c0c36d6c"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.900896 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" event={"ID":"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2","Type":"ContainerStarted","Data":"9a01d5b766a7c26b56ac93eaf7fd7cb38064e3d6863cc32a702da862bd5f2d95"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.907480 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" event={"ID":"adf5388c-f2b1-4cce-9616-03c9ecde87e8","Type":"ContainerStarted","Data":"4f8d0548048ef9c3fff2158dba5540ae8092bc4d1820e0b5f54c78038a97fc56"} Dec 06 03:23:33 crc kubenswrapper[4801]: I1206 03:23:33.919284 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" 
event={"ID":"075bc058-a6db-435f-b4da-78d269436fc5","Type":"ContainerStarted","Data":"e9be4d83d1a23d0134317edd9b94d5eb281dbc47afae863607c5fa5ff599309f"} Dec 06 03:23:34 crc kubenswrapper[4801]: I1206 03:23:34.037193 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" podStartSLOduration=3.137835666 podStartE2EDuration="51.037163279s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.4980403 +0000 UTC m=+1017.620647872" lastFinishedPulling="2025-12-06 03:23:32.397367913 +0000 UTC m=+1065.519975485" observedRunningTime="2025-12-06 03:23:34.01818739 +0000 UTC m=+1067.140794962" watchObservedRunningTime="2025-12-06 03:23:34.037163279 +0000 UTC m=+1067.159770851" Dec 06 03:23:34 crc kubenswrapper[4801]: I1206 03:23:34.929505 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-2ft2k" Dec 06 03:23:38 crc kubenswrapper[4801]: E1206 03:23:38.213518 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podUID="4cf67ce7-dfd4-46b5-98e6-7c6ff303793b" Dec 06 03:23:40 crc kubenswrapper[4801]: I1206 03:23:40.359040 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-69f8949d4-nwx88" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.920248 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" podUID="936dc55c-43bb-4e3d-8970-0811d582232a" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.924369 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.924694 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg9r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-78d48bff9d-tb9mp_openstack-operators(cd4c204b-eb70-4ed7-8800-9c0aa8df0894): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.925696 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podUID="528abee5-1816-4693-8c8d-ec8addacf287" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.930807 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" podUID="b755167b-08be-4bcf-bde8-5918264dc691" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.932010 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" podUID="a55050d3-bc38-44be-b873-79b80850217e" Dec 06 03:23:43 crc kubenswrapper[4801]: E1206 03:23:43.935590 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" podUID="c84bc554-95d1-4cb3-889e-e3eb348d5b37" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.040968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" event={"ID":"b755167b-08be-4bcf-bde8-5918264dc691","Type":"ContainerStarted","Data":"862e80eec0e507efb5556ce8878c09b5d28cdbc901261f5020081ffbca44644a"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.044646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" event={"ID":"c84bc554-95d1-4cb3-889e-e3eb348d5b37","Type":"ContainerStarted","Data":"6a00bd5b46f66dd99ecb6d9e6111d5c45147b2bb08568b45f36e865985380479"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.055439 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" event={"ID":"16612e3e-2588-413f-b0ff-0a97864485ca","Type":"ContainerStarted","Data":"47e6de833acb1cb6581af801c2571ea00fa39a2888cc3bf3bd0e37cae8f79bca"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 
03:23:44.056185 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.066044 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" event={"ID":"92a97017-cb01-43fd-ac39-38f0b0f40e44","Type":"ContainerStarted","Data":"75153470806f48956061bfb00312f3199e78f06fc1a819d380e5a8ca4d453a2c"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.066884 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.077062 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.077113 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.086064 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" event={"ID":"528abee5-1816-4693-8c8d-ec8addacf287","Type":"ContainerStarted","Data":"ee433f974ca56b397931d3dcbaadec5608e3d7c8d26572d024a88cdd5f36d5ec"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.117036 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" event={"ID":"936dc55c-43bb-4e3d-8970-0811d582232a","Type":"ContainerStarted","Data":"a5c8925c8f3f185b7309d2585b55b6c36f87aaa71d9e3f1acd849d4188a0d15a"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.140811 4801 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" event={"ID":"a55050d3-bc38-44be-b873-79b80850217e","Type":"ContainerStarted","Data":"3618b8e412a6f3697f9f186fd4155f9abc96aa990a6df492db87338947bad787"} Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.141313 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-747r8" podStartSLOduration=13.926231422 podStartE2EDuration="1m1.141271383s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.177184415 +0000 UTC m=+1018.299791987" lastFinishedPulling="2025-12-06 03:23:32.392224376 +0000 UTC m=+1065.514831948" observedRunningTime="2025-12-06 03:23:44.133854554 +0000 UTC m=+1077.256462136" watchObservedRunningTime="2025-12-06 03:23:44.141271383 +0000 UTC m=+1077.263878955" Dec 06 03:23:44 crc kubenswrapper[4801]: E1206 03:23:44.190794 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podUID="528abee5-1816-4693-8c8d-ec8addacf287" Dec 06 03:23:44 crc kubenswrapper[4801]: E1206 03:23:44.241373 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" podUID="cd4c204b-eb70-4ed7-8800-9c0aa8df0894" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.344247 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd44nznw" 
podStartSLOduration=58.041930009 podStartE2EDuration="1m1.344215739s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:23:28.974986811 +0000 UTC m=+1062.097594383" lastFinishedPulling="2025-12-06 03:23:32.277272541 +0000 UTC m=+1065.399880113" observedRunningTime="2025-12-06 03:23:44.327059139 +0000 UTC m=+1077.449666711" watchObservedRunningTime="2025-12-06 03:23:44.344215739 +0000 UTC m=+1077.466823311" Dec 06 03:23:44 crc kubenswrapper[4801]: I1206 03:23:44.688628 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vq4xk" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.159708 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" event={"ID":"bf89afba-23bf-4d4e-8de6-58be01700897","Type":"ContainerStarted","Data":"ad5bbc0ccb0d0a08ad442ef5e01bb4f247c15fa174211fbfdce63a75965c8249"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.159854 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.172447 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" event={"ID":"cd4c204b-eb70-4ed7-8800-9c0aa8df0894","Type":"ContainerStarted","Data":"3b9a9dc828d53bb7f22075e684559d1ee5d0d7d49b3515e760afd76cdb5db6f3"} Dec 06 03:23:45 crc kubenswrapper[4801]: E1206 03:23:45.174556 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" 
podUID="cd4c204b-eb70-4ed7-8800-9c0aa8df0894" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.182264 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" event={"ID":"075bc058-a6db-435f-b4da-78d269436fc5","Type":"ContainerStarted","Data":"df393d384746b44cc4e1c74698751c90a8d759347955355f7268bdc802b6f6f1"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.182613 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.194279 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" podStartSLOduration=2.085511615 podStartE2EDuration="1m2.194262021s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.411146228 +0000 UTC m=+1017.533753800" lastFinishedPulling="2025-12-06 03:23:44.519896644 +0000 UTC m=+1077.642504206" observedRunningTime="2025-12-06 03:23:45.189192595 +0000 UTC m=+1078.311800167" watchObservedRunningTime="2025-12-06 03:23:45.194262021 +0000 UTC m=+1078.316869583" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.194865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" event={"ID":"7793e32c-9749-4b8f-a643-666cfa0783a8","Type":"ContainerStarted","Data":"cef687702c79794f1d0e25f904713dbdd34482657f34dc09baac5fd4078fb1d2"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.195255 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.222291 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" event={"ID":"5e8887af-c61f-4cb5-83ae-c0a62adfb3b2","Type":"ContainerStarted","Data":"71a3463799ad70d9048a7b1c713bf6878de360947170264824a854c0433ca7e8"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.222328 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.222338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" event={"ID":"01a93f61-bdef-4ff2-9f14-357a4737f0fa","Type":"ContainerStarted","Data":"43974103a6be5ff161d603c35f600cf12decba57a17129907362e1264dcddfe8"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.222349 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.241072 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" event={"ID":"7cd29fbc-0b7b-4619-97b5-febfdd86a6e2","Type":"ContainerStarted","Data":"f9ccb2703f30323bafce269ed619f000ccf4e998653228656c4a4b44b446b9f1"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.241829 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.251028 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" event={"ID":"67793857-efbd-4ac4-8c3d-0f5f508ae3ee","Type":"ContainerStarted","Data":"c6a8c419a79dbcfdbe288f6089c96503ba6ec26786fce20f65519ecd1e67b516"} Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.251843 4801 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.253868 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" podStartSLOduration=2.111136923 podStartE2EDuration="1m2.25385203s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.35421079 +0000 UTC m=+1017.476818362" lastFinishedPulling="2025-12-06 03:23:44.496925897 +0000 UTC m=+1077.619533469" observedRunningTime="2025-12-06 03:23:45.251367944 +0000 UTC m=+1078.373975516" watchObservedRunningTime="2025-12-06 03:23:45.25385203 +0000 UTC m=+1078.376459602" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.290070 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" podStartSLOduration=2.586474419 podStartE2EDuration="1m2.290042242s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.787471667 +0000 UTC m=+1017.910079239" lastFinishedPulling="2025-12-06 03:23:44.49103949 +0000 UTC m=+1077.613647062" observedRunningTime="2025-12-06 03:23:45.282967312 +0000 UTC m=+1078.405574884" watchObservedRunningTime="2025-12-06 03:23:45.290042242 +0000 UTC m=+1078.412649814" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.324562 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" podStartSLOduration=2.621537671 podStartE2EDuration="1m2.324542987s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.795530364 +0000 UTC m=+1017.918137936" lastFinishedPulling="2025-12-06 03:23:44.49853568 +0000 UTC m=+1077.621143252" observedRunningTime="2025-12-06 03:23:45.315870755 +0000 UTC 
m=+1078.438478327" watchObservedRunningTime="2025-12-06 03:23:45.324542987 +0000 UTC m=+1078.447150559" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.353561 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" podStartSLOduration=2.8967310250000002 podStartE2EDuration="1m2.353537745s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.049618302 +0000 UTC m=+1018.172225874" lastFinishedPulling="2025-12-06 03:23:44.506425022 +0000 UTC m=+1077.629032594" observedRunningTime="2025-12-06 03:23:45.3462313 +0000 UTC m=+1078.468838872" watchObservedRunningTime="2025-12-06 03:23:45.353537745 +0000 UTC m=+1078.476145317" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.391297 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" podStartSLOduration=2.905324355 podStartE2EDuration="1m2.391279968s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.034382133 +0000 UTC m=+1018.156989705" lastFinishedPulling="2025-12-06 03:23:44.520337746 +0000 UTC m=+1077.642945318" observedRunningTime="2025-12-06 03:23:45.387524027 +0000 UTC m=+1078.510131599" watchObservedRunningTime="2025-12-06 03:23:45.391279968 +0000 UTC m=+1078.513887540" Dec 06 03:23:45 crc kubenswrapper[4801]: I1206 03:23:45.419968 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" podStartSLOduration=3.414534071 podStartE2EDuration="1m2.419947267s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.421837381 +0000 UTC m=+1018.544444953" lastFinishedPulling="2025-12-06 03:23:44.427250577 +0000 UTC m=+1077.549858149" observedRunningTime="2025-12-06 03:23:45.415318153 +0000 UTC 
m=+1078.537925715" watchObservedRunningTime="2025-12-06 03:23:45.419947267 +0000 UTC m=+1078.542554839" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.258562 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" event={"ID":"adf5388c-f2b1-4cce-9616-03c9ecde87e8","Type":"ContainerStarted","Data":"d5ebeae877880da24bcd7376ed4327c3833bb5400f531c099e8fa1c920fae4d4"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.259032 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.262099 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" event={"ID":"c84bc554-95d1-4cb3-889e-e3eb348d5b37","Type":"ContainerStarted","Data":"a8712ce14eaed5a385d2d5925231e25e87cf8b7e7e8dc248d9c7eeed88e26b41"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.262919 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.265126 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" event={"ID":"c9d1b6fe-6fbe-42b4-b6d3-88864f542000","Type":"ContainerStarted","Data":"fed1961679b47509598dd0724c8214c8089a25a4e722388399b00e129f29d6b6"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.265268 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.267150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" 
event={"ID":"936dc55c-43bb-4e3d-8970-0811d582232a","Type":"ContainerStarted","Data":"62b503828b5d2237527fbadb2ba725943d3243c3d955879c59321c166e6f9a05"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.267597 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.269869 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" event={"ID":"a55050d3-bc38-44be-b873-79b80850217e","Type":"ContainerStarted","Data":"90d801cfb9fe5098cf0e8184e1e98358d11feec02e1388f6f09cadc214ba399a"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.270330 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.272594 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" event={"ID":"b755167b-08be-4bcf-bde8-5918264dc691","Type":"ContainerStarted","Data":"d1c12c1acd5ea2b8145b4a5c8b77e1e1d706b0d14e335a8814f27d0ff40116dd"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.273084 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.276108 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" event={"ID":"83987163-dcd4-42d5-98fb-155bc07daf26","Type":"ContainerStarted","Data":"bc3b14e8e8f3015befea150086e65963fbc05aa8fe6b5f4286a326d1fd971574"} Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.276141 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:23:46 crc kubenswrapper[4801]: E1206 03:23:46.284199 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:ccc60d56d8efc2e91a7d8a7131eb7e06c189c32247f2a819818c084ba2e2f2ab\\\"\"" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" podUID="cd4c204b-eb70-4ed7-8800-9c0aa8df0894" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.285660 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" podStartSLOduration=3.6537100799999997 podStartE2EDuration="1m3.2856393s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.866984331 +0000 UTC m=+1017.989591903" lastFinishedPulling="2025-12-06 03:23:44.498913561 +0000 UTC m=+1077.621521123" observedRunningTime="2025-12-06 03:23:46.285244348 +0000 UTC m=+1079.407851940" watchObservedRunningTime="2025-12-06 03:23:46.2856393 +0000 UTC m=+1079.408246872" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.315856 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" podStartSLOduration=3.333450414 podStartE2EDuration="1m3.315833449s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.987801293 +0000 UTC m=+1018.110408865" lastFinishedPulling="2025-12-06 03:23:44.970184338 +0000 UTC m=+1078.092791900" observedRunningTime="2025-12-06 03:23:46.310841286 +0000 UTC m=+1079.433448858" watchObservedRunningTime="2025-12-06 03:23:46.315833449 +0000 UTC m=+1079.438441021" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.358357 4801 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" podStartSLOduration=3.463708841 podStartE2EDuration="1m3.35833473s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.602398551 +0000 UTC m=+1017.725006123" lastFinishedPulling="2025-12-06 03:23:44.49702445 +0000 UTC m=+1077.619632012" observedRunningTime="2025-12-06 03:23:46.335014914 +0000 UTC m=+1079.457622486" watchObservedRunningTime="2025-12-06 03:23:46.35833473 +0000 UTC m=+1079.480942302" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.372977 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" podStartSLOduration=3.451734899 podStartE2EDuration="1m3.372955682s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:44.632962801 +0000 UTC m=+1017.755570373" lastFinishedPulling="2025-12-06 03:23:44.554183584 +0000 UTC m=+1077.676791156" observedRunningTime="2025-12-06 03:23:46.3687775 +0000 UTC m=+1079.491385072" watchObservedRunningTime="2025-12-06 03:23:46.372955682 +0000 UTC m=+1079.495563264" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.401767 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" podStartSLOduration=3.467303887 podStartE2EDuration="1m3.401731085s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.034380963 +0000 UTC m=+1018.156988535" lastFinishedPulling="2025-12-06 03:23:44.968808161 +0000 UTC m=+1078.091415733" observedRunningTime="2025-12-06 03:23:46.39522926 +0000 UTC m=+1079.517836832" watchObservedRunningTime="2025-12-06 03:23:46.401731085 +0000 UTC m=+1079.524338657" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.420734 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" podStartSLOduration=3.640855665 podStartE2EDuration="1m3.420719684s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.158180816 +0000 UTC m=+1018.280788388" lastFinishedPulling="2025-12-06 03:23:44.938044845 +0000 UTC m=+1078.060652407" observedRunningTime="2025-12-06 03:23:46.419526122 +0000 UTC m=+1079.542133694" watchObservedRunningTime="2025-12-06 03:23:46.420719684 +0000 UTC m=+1079.543327256" Dec 06 03:23:46 crc kubenswrapper[4801]: I1206 03:23:46.491897 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" podStartSLOduration=3.801502506 podStartE2EDuration="1m3.491880924s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.175517171 +0000 UTC m=+1018.298124743" lastFinishedPulling="2025-12-06 03:23:44.865895589 +0000 UTC m=+1077.988503161" observedRunningTime="2025-12-06 03:23:46.480388216 +0000 UTC m=+1079.602995788" watchObservedRunningTime="2025-12-06 03:23:46.491880924 +0000 UTC m=+1079.614488496" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.372015 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-6tvst" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.400217 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-68dd88d65f-bgnqt" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.504450 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-b659z" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.552900 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-7qljd" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.560498 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-kfdvh" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.694108 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-zsdh6" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.779572 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-9gv2p" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.887491 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-95rj5" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.983437 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-xm85p" Dec 06 03:23:53 crc kubenswrapper[4801]: I1206 03:23:53.998374 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5m2qc" Dec 06 03:23:54 crc kubenswrapper[4801]: I1206 03:23:54.045650 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-fcq4r" Dec 06 03:23:54 crc kubenswrapper[4801]: I1206 03:23:54.180656 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-jnb5d" Dec 06 03:23:54 crc kubenswrapper[4801]: I1206 03:23:54.429583 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-wbqp2" Dec 06 03:23:54 crc kubenswrapper[4801]: I1206 03:23:54.727407 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cg5wj" Dec 06 03:24:00 crc kubenswrapper[4801]: I1206 03:24:00.391513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" event={"ID":"4cf67ce7-dfd4-46b5-98e6-7c6ff303793b","Type":"ContainerStarted","Data":"c320ab95a01ed82baa895b3d20d35364f7d477d997fd1e225190b533a2fdeae7"} Dec 06 03:24:00 crc kubenswrapper[4801]: I1206 03:24:00.400327 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" event={"ID":"1ab19549-8876-40f6-82eb-c29be8d76122","Type":"ContainerStarted","Data":"237c8cb67b02b9353cd96933cf8c040d0a0b35e3dc0b56d6e8b03f20aad00ba1"} Dec 06 03:24:00 crc kubenswrapper[4801]: I1206 03:24:00.400601 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" Dec 06 03:24:00 crc kubenswrapper[4801]: I1206 03:24:00.409250 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-9dn6g" podStartSLOduration=1.7523135490000001 podStartE2EDuration="1m16.409230538s" podCreationTimestamp="2025-12-06 03:22:44 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.488078398 +0000 UTC m=+1018.610685970" lastFinishedPulling="2025-12-06 03:24:00.144995387 +0000 UTC m=+1093.267602959" observedRunningTime="2025-12-06 03:24:00.405111718 +0000 UTC m=+1093.527719290" watchObservedRunningTime="2025-12-06 03:24:00.409230538 +0000 UTC m=+1093.531838110" Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.410278 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" event={"ID":"528abee5-1816-4693-8c8d-ec8addacf287","Type":"ContainerStarted","Data":"b84d7dbb7b8ea9455f8c9062902e624e089100cd14cd7ffe0479726b6934c235"} Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.410558 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.412409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" event={"ID":"cd4c204b-eb70-4ed7-8800-9c0aa8df0894","Type":"ContainerStarted","Data":"0c12ab5918a1a700f0c9b5de53ec45c6e696a7c841b8df8bfa4ed473010285e1"} Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.412728 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.429677 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" podStartSLOduration=3.36560565 podStartE2EDuration="1m18.429651223s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:22:45.223933721 +0000 UTC m=+1018.346541293" lastFinishedPulling="2025-12-06 03:24:00.287979274 +0000 UTC m=+1093.410586866" observedRunningTime="2025-12-06 03:24:01.427854744 +0000 UTC m=+1094.550462346" watchObservedRunningTime="2025-12-06 03:24:01.429651223 +0000 UTC m=+1094.552258795" Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.434509 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" podStartSLOduration=3.609482022 podStartE2EDuration="1m18.434494232s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" 
firstStartedPulling="2025-12-06 03:22:45.319963427 +0000 UTC m=+1018.442570999" lastFinishedPulling="2025-12-06 03:24:00.144975637 +0000 UTC m=+1093.267583209" observedRunningTime="2025-12-06 03:24:00.438789712 +0000 UTC m=+1093.561397294" watchObservedRunningTime="2025-12-06 03:24:01.434494232 +0000 UTC m=+1094.557101804" Dec 06 03:24:01 crc kubenswrapper[4801]: I1206 03:24:01.449410 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" podStartSLOduration=46.836442599 podStartE2EDuration="1m18.449390362s" podCreationTimestamp="2025-12-06 03:22:43 +0000 UTC" firstStartedPulling="2025-12-06 03:23:28.97570259 +0000 UTC m=+1062.098310162" lastFinishedPulling="2025-12-06 03:24:00.588650353 +0000 UTC m=+1093.711257925" observedRunningTime="2025-12-06 03:24:01.448700743 +0000 UTC m=+1094.571308315" watchObservedRunningTime="2025-12-06 03:24:01.449390362 +0000 UTC m=+1094.571997924" Dec 06 03:24:05 crc kubenswrapper[4801]: I1206 03:24:05.552691 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-tb9mp" Dec 06 03:24:13 crc kubenswrapper[4801]: I1206 03:24:13.816091 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-d2q7s" Dec 06 03:24:14 crc kubenswrapper[4801]: I1206 03:24:14.413204 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-knvll" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.058893 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.060824 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.062937 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.063194 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.063975 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jmwdl" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.065693 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.075524 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.130730 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.132593 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.138103 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.142094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jnt\" (UniqueName: \"kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.142187 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.147194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.243468 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvshl\" (UniqueName: \"kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.243544 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.243595 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.243613 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.244402 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2jnt\" (UniqueName: \"kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.244657 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.267504 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2jnt\" (UniqueName: \"kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt\") pod \"dnsmasq-dns-675f4bcbfc-9kfbw\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc 
kubenswrapper[4801]: I1206 03:24:28.345663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.345932 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvshl\" (UniqueName: \"kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.345998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.346947 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.346995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.363642 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xvshl\" (UniqueName: \"kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl\") pod \"dnsmasq-dns-78dd6ddcc-hlscf\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.381563 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.447355 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.864689 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:24:28 crc kubenswrapper[4801]: I1206 03:24:28.942704 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:24:29 crc kubenswrapper[4801]: I1206 03:24:29.644339 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" event={"ID":"d5492a50-5dc0-4bd7-a7b7-2b6f29843784","Type":"ContainerStarted","Data":"0bc6ea200cb311c8b5395e94532eb0946400911470101b8010e9b01ea9e5bf62"} Dec 06 03:24:29 crc kubenswrapper[4801]: I1206 03:24:29.645943 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" event={"ID":"46fec3e3-888d-4a1e-ae38-c66019a091aa","Type":"ContainerStarted","Data":"79898eee51ffcce6cac742c634cedac35d235308f630b721dd0bb7c5d6c9a942"} Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.119001 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.155888 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.157735 4801 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.165335 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.194231 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.194309 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnv5b\" (UniqueName: \"kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.194340 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.296770 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.296833 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qnv5b\" (UniqueName: \"kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.296887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.297792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.297957 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.329723 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnv5b\" (UniqueName: \"kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b\") pod \"dnsmasq-dns-666b6646f7-kjr5c\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.473906 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.479193 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.511286 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.519229 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.524179 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.601668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.602131 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnnn\" (UniqueName: \"kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.602188 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.703739 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.703864 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.703893 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnnn\" (UniqueName: \"kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.705056 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.705203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config\") pod \"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.736910 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnnn\" (UniqueName: \"kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn\") pod 
\"dnsmasq-dns-57d769cc4f-857lm\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:31 crc kubenswrapper[4801]: I1206 03:24:31.853665 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.105547 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.336039 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.338128 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.341024 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.341333 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.341745 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-92ctn" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.342143 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.342324 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.342346 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.343122 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 03:24:32 crc 
kubenswrapper[4801]: I1206 03:24:32.352938 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.418615 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.418661 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.418686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.418717 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.418999 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419056 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419095 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86scx\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419177 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419222 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419262 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " 
pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.419286 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.449495 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521119 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521206 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521296 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521317 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 
06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521336 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521368 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521408 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521427 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521450 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86scx\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521473 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.521501 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.522739 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.523642 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.523864 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.524201 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.524494 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.524571 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.529733 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.530777 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.531574 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.531798 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.543103 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86scx\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.549600 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.689583 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.691280 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.693956 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.694534 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.694571 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.695275 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.695485 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.695710 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.695912 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-72jlq" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.696316 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.720194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.724517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" event={"ID":"ee2d2aeb-4219-432e-8164-dfd69500a1cd","Type":"ContainerStarted","Data":"cd760bd6876289db3fcdde1b1a7c18de30506afedc0fd1f54a638d0792d7f815"} Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.733342 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" 
event={"ID":"08c51594-17aa-4372-b10e-5dfef9eb5f85","Type":"ContainerStarted","Data":"7b6aa87d63e34ad26e8ed52cec449024ae0425bbb423799f0a02c33801d66e8a"} Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825630 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825712 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825749 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8p9\" (UniqueName: 
\"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825810 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825839 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825866 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825903 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825938 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.825967 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933708 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933779 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8p9\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933818 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933847 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933863 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933889 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933925 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.933974 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.934016 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.934042 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.934066 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.934165 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.935247 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.937368 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 
03:24:32.938687 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.939292 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.939663 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.951812 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.954631 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.962482 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.964815 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.980980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:32 crc kubenswrapper[4801]: I1206 03:24:32.992253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8p9\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9\") pod \"rabbitmq-cell1-server-0\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.060851 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.241428 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:24:33 crc kubenswrapper[4801]: W1206 03:24:33.303198 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e01c6fa_4dee_4835_a73d_30cd5af1a83f.slice/crio-4b1de799b98c21575b5863bdc94b152ffa87cb91e523369b79225eecf3db34f8 WatchSource:0}: Error finding container 4b1de799b98c21575b5863bdc94b152ffa87cb91e523369b79225eecf3db34f8: Status 404 returned error can't find the container with id 4b1de799b98c21575b5863bdc94b152ffa87cb91e523369b79225eecf3db34f8 Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.599581 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 03:24:33 crc kubenswrapper[4801]: W1206 03:24:33.620484 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8d84a21_b2e6_4d69_9f2b_48870e2d1702.slice/crio-49cc6d4a88a290bb2b65ed5867cd73ab62463f35ff3fcc4785be9a4d0e0be608 WatchSource:0}: Error finding container 49cc6d4a88a290bb2b65ed5867cd73ab62463f35ff3fcc4785be9a4d0e0be608: Status 404 returned error can't find the container with id 49cc6d4a88a290bb2b65ed5867cd73ab62463f35ff3fcc4785be9a4d0e0be608 Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.757491 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerStarted","Data":"4b1de799b98c21575b5863bdc94b152ffa87cb91e523369b79225eecf3db34f8"} Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.762054 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerStarted","Data":"49cc6d4a88a290bb2b65ed5867cd73ab62463f35ff3fcc4785be9a4d0e0be608"} Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.925899 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.927740 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.934600 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.934871 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.935000 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.935992 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5jsqz" Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.940275 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 03:24:33 crc kubenswrapper[4801]: I1206 03:24:33.946366 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062167 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpgg\" (UniqueName: \"kubernetes.io/projected/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kube-api-access-wrpgg\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062249 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-default\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062329 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kolla-config\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062360 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062410 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062482 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062605 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.062666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpgg\" (UniqueName: \"kubernetes.io/projected/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kube-api-access-wrpgg\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165717 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-default\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kolla-config\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165803 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.165865 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.166846 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") device mount 
path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.167416 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.167872 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kolla-config\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.169741 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.174820 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/463cb826-89ba-4c9d-b4ae-9453464d3ebc-config-data-default\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.200952 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.202698 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wrpgg\" (UniqueName: \"kubernetes.io/projected/463cb826-89ba-4c9d-b4ae-9453464d3ebc-kube-api-access-wrpgg\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.209637 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.228301 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/463cb826-89ba-4c9d-b4ae-9453464d3ebc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"463cb826-89ba-4c9d-b4ae-9453464d3ebc\") " pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.271847 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 03:24:34 crc kubenswrapper[4801]: I1206 03:24:34.971532 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.521349 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.523656 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.528707 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zwfwk" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.529081 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.529182 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.529449 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.542371 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.597955 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.599868 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601658 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601714 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601743 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601771 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601803 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601830 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6lk\" (UniqueName: \"kubernetes.io/projected/c85c66a1-6bad-499d-8a59-75020d456cd7-kube-api-access-tb6lk\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601862 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.601892 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.602414 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-8w9n8" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.602808 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.605834 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.652977 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 
03:24:35.703537 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703587 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703636 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703719 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703876 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tb6lk\" (UniqueName: \"kubernetes.io/projected/c85c66a1-6bad-499d-8a59-75020d456cd7-kube-api-access-tb6lk\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.704279 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.703941 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.705397 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.706239 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.706399 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.706860 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.712402 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.721663 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c66a1-6bad-499d-8a59-75020d456cd7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.724154 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb6lk\" (UniqueName: \"kubernetes.io/projected/c85c66a1-6bad-499d-8a59-75020d456cd7-kube-api-access-tb6lk\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.725186 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c85c66a1-6bad-499d-8a59-75020d456cd7-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.752243 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c85c66a1-6bad-499d-8a59-75020d456cd7\") " pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.807468 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-combined-ca-bundle\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.807527 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-memcached-tls-certs\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.807882 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kolla-config\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.808031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jr2\" (UniqueName: \"kubernetes.io/projected/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kube-api-access-w7jr2\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 
03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.808164 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-config-data\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.828343 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"463cb826-89ba-4c9d-b4ae-9453464d3ebc","Type":"ContainerStarted","Data":"7c7f328ca65583fe6a89009771b310ec633573f3d9f93f991c84296d10552215"} Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.900607 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.915555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-memcached-tls-certs\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.915608 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-combined-ca-bundle\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.915677 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kolla-config\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.915704 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7jr2\" (UniqueName: \"kubernetes.io/projected/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kube-api-access-w7jr2\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.915764 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-config-data\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.916693 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-config-data\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.919868 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kolla-config\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.930984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-combined-ca-bundle\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.939420 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7jr2\" (UniqueName: \"kubernetes.io/projected/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-kube-api-access-w7jr2\") pod \"memcached-0\" (UID: 
\"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:35 crc kubenswrapper[4801]: I1206 03:24:35.942118 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/225c5f5f-7422-45ff-a2b8-2b9d3b577d79-memcached-tls-certs\") pod \"memcached-0\" (UID: \"225c5f5f-7422-45ff-a2b8-2b9d3b577d79\") " pod="openstack/memcached-0" Dec 06 03:24:36 crc kubenswrapper[4801]: I1206 03:24:36.234644 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 06 03:24:36 crc kubenswrapper[4801]: I1206 03:24:36.463490 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 03:24:36 crc kubenswrapper[4801]: I1206 03:24:36.853478 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c85c66a1-6bad-499d-8a59-75020d456cd7","Type":"ContainerStarted","Data":"ad3ccace073b618fb1eaebe23deac16b5c2d7d7458e00eeec569860951182108"} Dec 06 03:24:36 crc kubenswrapper[4801]: I1206 03:24:36.903314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.166145 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.167142 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.178297 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-fkff4" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.204474 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.288676 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rnl\" (UniqueName: \"kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl\") pod \"kube-state-metrics-0\" (UID: \"f3419971-0654-47d2-befb-5afb0761011c\") " pod="openstack/kube-state-metrics-0" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.395825 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4rnl\" (UniqueName: \"kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl\") pod \"kube-state-metrics-0\" (UID: \"f3419971-0654-47d2-befb-5afb0761011c\") " pod="openstack/kube-state-metrics-0" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.429195 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4rnl\" (UniqueName: \"kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl\") pod \"kube-state-metrics-0\" (UID: \"f3419971-0654-47d2-befb-5afb0761011c\") " pod="openstack/kube-state-metrics-0" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.502026 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 03:24:37 crc kubenswrapper[4801]: I1206 03:24:37.863809 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"225c5f5f-7422-45ff-a2b8-2b9d3b577d79","Type":"ContainerStarted","Data":"ea60a88ee9c904502598e5db596c2df60a98d24bb8a441b731e72a14ab0ddbfe"} Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.005573 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqlb5"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.007711 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.011601 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.012008 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.016650 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vrjqs" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.018052 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.066979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-scripts\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067092 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067145 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgb7\" (UniqueName: \"kubernetes.io/projected/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-kube-api-access-fhgb7\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067177 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067236 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-combined-ca-bundle\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067453 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-ovn-controller-tls-certs\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.067532 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-log-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.116724 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-44f28"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.118729 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.133908 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-44f28"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169111 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-etc-ovs\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc5nw\" (UniqueName: \"kubernetes.io/projected/276fe396-a90f-4c5b-83ce-ac17c7617e63-kube-api-access-dc5nw\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-lib\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169413 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-scripts\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169492 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/276fe396-a90f-4c5b-83ce-ac17c7617e63-scripts\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169657 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-run\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169701 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169719 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgb7\" (UniqueName: 
\"kubernetes.io/projected/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-kube-api-access-fhgb7\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169766 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169851 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-combined-ca-bundle\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169893 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-ovn-controller-tls-certs\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169934 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-log\") pod 
\"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.169965 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-log-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.170391 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.170464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-log-ovn\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.171362 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-var-run\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.171825 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-scripts\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.181611 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-combined-ca-bundle\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.181732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-ovn-controller-tls-certs\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.190190 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgb7\" (UniqueName: \"kubernetes.io/projected/eefe8d7e-f739-42c8-88fb-2c27a8630e8b-kube-api-access-fhgb7\") pod \"ovn-controller-qqlb5\" (UID: \"eefe8d7e-f739-42c8-88fb-2c27a8630e8b\") " pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272195 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-log\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272326 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-etc-ovs\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272360 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc5nw\" (UniqueName: 
\"kubernetes.io/projected/276fe396-a90f-4c5b-83ce-ac17c7617e63-kube-api-access-dc5nw\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272394 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-lib\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272439 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/276fe396-a90f-4c5b-83ce-ac17c7617e63-scripts\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-run\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272685 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-run\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272824 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-etc-ovs\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " 
pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272893 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-log\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.272906 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/276fe396-a90f-4c5b-83ce-ac17c7617e63-var-lib\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.275913 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/276fe396-a90f-4c5b-83ce-ac17c7617e63-scripts\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.299263 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc5nw\" (UniqueName: \"kubernetes.io/projected/276fe396-a90f-4c5b-83ce-ac17c7617e63-kube-api-access-dc5nw\") pod \"ovn-controller-ovs-44f28\" (UID: \"276fe396-a90f-4c5b-83ce-ac17c7617e63\") " pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.341922 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.434224 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-44f28" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.468067 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.469847 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.477563 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.477599 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.477674 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.483943 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8qh2f" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.484546 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.490292 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.577916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.577988 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578061 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xxz2\" (UniqueName: \"kubernetes.io/projected/dd4e7515-f487-4c9e-b405-a5f61022d5e5-kube-api-access-6xxz2\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578084 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578124 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578154 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.578470 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680240 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680358 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680421 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680475 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc 
kubenswrapper[4801]: I1206 03:24:41.680506 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680548 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680596 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xxz2\" (UniqueName: \"kubernetes.io/projected/dd4e7515-f487-4c9e-b405-a5f61022d5e5-kube-api-access-6xxz2\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.680624 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.681198 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.681967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-config\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.682379 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.682664 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd4e7515-f487-4c9e-b405-a5f61022d5e5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.685649 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.685729 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.687088 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4e7515-f487-4c9e-b405-a5f61022d5e5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" 
Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.705321 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xxz2\" (UniqueName: \"kubernetes.io/projected/dd4e7515-f487-4c9e-b405-a5f61022d5e5-kube-api-access-6xxz2\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.716400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-nb-0\" (UID: \"dd4e7515-f487-4c9e-b405-a5f61022d5e5\") " pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:41 crc kubenswrapper[4801]: I1206 03:24:41.791323 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.899707 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.905032 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.909331 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-sb9r6" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.909331 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.909419 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.909475 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 06 03:24:44 crc kubenswrapper[4801]: I1206 03:24:44.911652 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.045104 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.045162 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.045638 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvdch\" (UniqueName: \"kubernetes.io/projected/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-kube-api-access-vvdch\") pod \"ovsdbserver-sb-0\" (UID: 
\"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.045872 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.046049 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.046171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.046380 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.046640 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " 
pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.149365 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.149883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.150013 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.150545 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.151307 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.151743 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvdch\" 
(UniqueName: \"kubernetes.io/projected/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-kube-api-access-vvdch\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.151936 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.152059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.152165 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.152318 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.152792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-config\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " 
pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.152901 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.159592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.159647 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.161894 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.178625 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.179929 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvdch\" 
(UniqueName: \"kubernetes.io/projected/f4e4cd15-b8c1-4521-82f7-d54fb0141c9b-kube-api-access-vvdch\") pod \"ovsdbserver-sb-0\" (UID: \"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b\") " pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:45 crc kubenswrapper[4801]: I1206 03:24:45.272248 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 06 03:24:52 crc kubenswrapper[4801]: I1206 03:24:52.644495 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 03:24:58 crc kubenswrapper[4801]: E1206 03:24:58.772309 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 06 03:24:58 crc kubenswrapper[4801]: E1206 03:24:58.773292 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw8p9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(b8d84a21-b2e6-4d69-9f2b-48870e2d1702): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:24:58 crc 
kubenswrapper[4801]: E1206 03:24:58.774482 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" Dec 06 03:24:59 crc kubenswrapper[4801]: E1206 03:24:59.063207 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" Dec 06 03:24:59 crc kubenswrapper[4801]: W1206 03:24:59.624005 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3419971_0654_47d2_befb_5afb0761011c.slice/crio-893e6516bc8141a6f5bcfe836bfb1b7afdc547bcfaaca04bf438629084df93f7 WatchSource:0}: Error finding container 893e6516bc8141a6f5bcfe836bfb1b7afdc547bcfaaca04bf438629084df93f7: Status 404 returned error can't find the container with id 893e6516bc8141a6f5bcfe836bfb1b7afdc547bcfaaca04bf438629084df93f7 Dec 06 03:25:00 crc kubenswrapper[4801]: I1206 03:25:00.068666 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3419971-0654-47d2-befb-5afb0761011c","Type":"ContainerStarted","Data":"893e6516bc8141a6f5bcfe836bfb1b7afdc547bcfaaca04bf438629084df93f7"} Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.331402 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.332484 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
dnsmasq-dns-675f4bcbfc-9kfbw_openstack(d5492a50-5dc0-4bd7-a7b7-2b6f29843784): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.333795 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" podUID="d5492a50-5dc0-4bd7-a7b7-2b6f29843784" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.346304 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.346454 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnv5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-kjr5c_openstack(ee2d2aeb-4219-432e-8164-dfd69500a1cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.347781 4801 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.448345 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.448583 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvshl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hlscf_openstack(46fec3e3-888d-4a1e-ae38-c66019a091aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.449830 4801 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" podUID="46fec3e3-888d-4a1e-ae38-c66019a091aa" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.614213 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.615728 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmnnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-857lm_openstack(08c51594-17aa-4372-b10e-5dfef9eb5f85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:07 crc kubenswrapper[4801]: E1206 03:25:07.617679 4801 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" Dec 06 03:25:08 crc kubenswrapper[4801]: E1206 03:25:08.154421 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" Dec 06 03:25:08 crc kubenswrapper[4801]: E1206 03:25:08.154570 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" Dec 06 03:25:08 crc kubenswrapper[4801]: I1206 03:25:08.331300 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5"] Dec 06 03:25:08 crc kubenswrapper[4801]: I1206 03:25:08.345857 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 03:25:08 crc kubenswrapper[4801]: I1206 03:25:08.369537 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 06 03:25:08 crc kubenswrapper[4801]: W1206 03:25:08.833702 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4e4cd15_b8c1_4521_82f7_d54fb0141c9b.slice/crio-66ce32668228f33ace7a05f228409fe1fa45cf19fceba21b0839a8f9a2d8e180 WatchSource:0}: Error finding container 66ce32668228f33ace7a05f228409fe1fa45cf19fceba21b0839a8f9a2d8e180: Status 404 returned error can't find the container with id 
66ce32668228f33ace7a05f228409fe1fa45cf19fceba21b0839a8f9a2d8e180 Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:08.915092 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:08.920796 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:08.935339 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2jnt\" (UniqueName: \"kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt\") pod \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:08.935521 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config\") pod \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\" (UID: \"d5492a50-5dc0-4bd7-a7b7-2b6f29843784\") " Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:08.936088 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config" (OuterVolumeSpecName: "config") pod "d5492a50-5dc0-4bd7-a7b7-2b6f29843784" (UID: "d5492a50-5dc0-4bd7-a7b7-2b6f29843784"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.412377 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvshl\" (UniqueName: \"kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl\") pod \"46fec3e3-888d-4a1e-ae38-c66019a091aa\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.412439 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc\") pod \"46fec3e3-888d-4a1e-ae38-c66019a091aa\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.412488 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config\") pod \"46fec3e3-888d-4a1e-ae38-c66019a091aa\" (UID: \"46fec3e3-888d-4a1e-ae38-c66019a091aa\") " Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.412960 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.413472 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config" (OuterVolumeSpecName: "config") pod "46fec3e3-888d-4a1e-ae38-c66019a091aa" (UID: "46fec3e3-888d-4a1e-ae38-c66019a091aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.413836 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46fec3e3-888d-4a1e-ae38-c66019a091aa" (UID: "46fec3e3-888d-4a1e-ae38-c66019a091aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.437143 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.438899 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.461675 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt" (OuterVolumeSpecName: "kube-api-access-k2jnt") pod "d5492a50-5dc0-4bd7-a7b7-2b6f29843784" (UID: "d5492a50-5dc0-4bd7-a7b7-2b6f29843784"). InnerVolumeSpecName "kube-api-access-k2jnt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.465567 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl" (OuterVolumeSpecName: "kube-api-access-xvshl") pod "46fec3e3-888d-4a1e-ae38-c66019a091aa" (UID: "46fec3e3-888d-4a1e-ae38-c66019a091aa"). InnerVolumeSpecName "kube-api-access-xvshl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.514952 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvshl\" (UniqueName: \"kubernetes.io/projected/46fec3e3-888d-4a1e-ae38-c66019a091aa-kube-api-access-xvshl\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.514996 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.515010 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fec3e3-888d-4a1e-ae38-c66019a091aa-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.515023 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2jnt\" (UniqueName: \"kubernetes.io/projected/d5492a50-5dc0-4bd7-a7b7-2b6f29843784-kube-api-access-k2jnt\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"dd4e7515-f487-4c9e-b405-a5f61022d5e5","Type":"ContainerStarted","Data":"f61b75bede25fb73a8f99baba9ced79bcfb32154d311a3d23a0ae879ee92dc76"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532740 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b","Type":"ContainerStarted","Data":"66ce32668228f33ace7a05f228409fe1fa45cf19fceba21b0839a8f9a2d8e180"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532774 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-44f28"] Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532798 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/openstack-galera-0" event={"ID":"463cb826-89ba-4c9d-b4ae-9453464d3ebc","Type":"ContainerStarted","Data":"1d2ee6f45706dcf2eb6e1b0d19ca99b7e652b109d5002314cd61270db02942ff"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532820 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"225c5f5f-7422-45ff-a2b8-2b9d3b577d79","Type":"ContainerStarted","Data":"82196f097511d91304895063b9442c08f59377a0bb80a9121545a42bc9411c46"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532831 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c85c66a1-6bad-499d-8a59-75020d456cd7","Type":"ContainerStarted","Data":"bfe1e8f8123f93d565059ce5a88a390a5c7ad059e1d2399d35f80e20e99e0a2b"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532844 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-9kfbw" event={"ID":"d5492a50-5dc0-4bd7-a7b7-2b6f29843784","Type":"ContainerDied","Data":"0bc6ea200cb311c8b5395e94532eb0946400911470101b8010e9b01ea9e5bf62"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532885 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hlscf" event={"ID":"46fec3e3-888d-4a1e-ae38-c66019a091aa","Type":"ContainerDied","Data":"79898eee51ffcce6cac742c634cedac35d235308f630b721dd0bb7c5d6c9a942"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.532901 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5" event={"ID":"eefe8d7e-f739-42c8-88fb-2c27a8630e8b","Type":"ContainerStarted","Data":"09d32f7616880873b839950054ee93514199f5372608a36640b25c62213ec4ee"} Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.799034 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.804419 4801 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-9kfbw"] Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.890412 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:25:09 crc kubenswrapper[4801]: I1206 03:25:09.898324 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hlscf"] Dec 06 03:25:10 crc kubenswrapper[4801]: I1206 03:25:10.452892 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 06 03:25:11 crc kubenswrapper[4801]: I1206 03:25:11.170418 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:25:11 crc kubenswrapper[4801]: I1206 03:25:11.170519 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:25:11 crc kubenswrapper[4801]: I1206 03:25:11.233616 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fec3e3-888d-4a1e-ae38-c66019a091aa" path="/var/lib/kubelet/pods/46fec3e3-888d-4a1e-ae38-c66019a091aa/volumes" Dec 06 03:25:11 crc kubenswrapper[4801]: I1206 03:25:11.234631 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5492a50-5dc0-4bd7-a7b7-2b6f29843784" path="/var/lib/kubelet/pods/d5492a50-5dc0-4bd7-a7b7-2b6f29843784/volumes" Dec 06 03:25:12 crc kubenswrapper[4801]: I1206 03:25:12.474416 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerStarted","Data":"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795"} Dec 06 03:25:12 crc kubenswrapper[4801]: I1206 03:25:12.502461 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.902385698 podStartE2EDuration="37.502428913s" podCreationTimestamp="2025-12-06 03:24:35 +0000 UTC" firstStartedPulling="2025-12-06 03:24:36.988134203 +0000 UTC m=+1130.110741775" lastFinishedPulling="2025-12-06 03:25:07.588177388 +0000 UTC m=+1160.710784990" observedRunningTime="2025-12-06 03:25:10.491063092 +0000 UTC m=+1163.613670704" watchObservedRunningTime="2025-12-06 03:25:12.502428913 +0000 UTC m=+1165.625036525" Dec 06 03:25:14 crc kubenswrapper[4801]: W1206 03:25:14.036094 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod276fe396_a90f_4c5b_83ce_ac17c7617e63.slice/crio-795d5887ba6e5720336ad66338cb0534a284c112781cd48fb4729b07b43479dd WatchSource:0}: Error finding container 795d5887ba6e5720336ad66338cb0534a284c112781cd48fb4729b07b43479dd: Status 404 returned error can't find the container with id 795d5887ba6e5720336ad66338cb0534a284c112781cd48fb4729b07b43479dd Dec 06 03:25:14 crc kubenswrapper[4801]: I1206 03:25:14.500394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-44f28" event={"ID":"276fe396-a90f-4c5b-83ce-ac17c7617e63","Type":"ContainerStarted","Data":"795d5887ba6e5720336ad66338cb0534a284c112781cd48fb4729b07b43479dd"} Dec 06 03:25:16 crc kubenswrapper[4801]: I1206 03:25:16.237038 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 06 03:25:16 crc kubenswrapper[4801]: I1206 03:25:16.526023 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerStarted","Data":"103e39ea3c7868e8eeec409ec28e22cd8888fb8366b8635d350b4e3f90de6af7"} Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.147732 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gv5hg"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.149529 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.152359 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.173796 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gv5hg"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.252985 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cc7364-fab1-449d-9939-020c58f7e9af-config\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.253062 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.253103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8w9n\" (UniqueName: \"kubernetes.io/projected/00cc7364-fab1-449d-9939-020c58f7e9af-kube-api-access-s8w9n\") pod \"ovn-controller-metrics-gv5hg\" (UID: 
\"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.253147 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-combined-ca-bundle\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.253213 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovn-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.253269 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovs-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.317328 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.337077 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.338848 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.343366 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357242 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovs-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357449 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cc7364-fab1-449d-9939-020c58f7e9af-config\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357544 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357608 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8w9n\" (UniqueName: \"kubernetes.io/projected/00cc7364-fab1-449d-9939-020c58f7e9af-kube-api-access-s8w9n\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357673 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-combined-ca-bundle\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357837 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovn-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.357898 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovs-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.358096 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/00cc7364-fab1-449d-9939-020c58f7e9af-ovn-rundir\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.359708 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00cc7364-fab1-449d-9939-020c58f7e9af-config\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.375423 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-combined-ca-bundle\") pod \"ovn-controller-metrics-gv5hg\" 
(UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.379995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8w9n\" (UniqueName: \"kubernetes.io/projected/00cc7364-fab1-449d-9939-020c58f7e9af-kube-api-access-s8w9n\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.388127 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.394006 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/00cc7364-fab1-449d-9939-020c58f7e9af-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gv5hg\" (UID: \"00cc7364-fab1-449d-9939-020c58f7e9af\") " pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.460821 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.460866 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.460925 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xf4kl\" (UniqueName: \"kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.460974 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.472121 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.491867 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gv5hg" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.500472 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.502016 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.508502 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.525055 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.564814 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.564945 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.564972 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565030 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565063 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km8sh\" (UniqueName: \"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565107 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565137 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565164 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4kl\" (UniqueName: \"kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.565201 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc 
kubenswrapper[4801]: I1206 03:25:25.566571 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.567333 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.567939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.613823 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4kl\" (UniqueName: \"kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl\") pod \"dnsmasq-dns-7fd796d7df-6r2z6\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.662224 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.666931 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.667994 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.668114 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.668142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km8sh\" (UniqueName: \"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.668176 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 
03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.668193 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.668740 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.669268 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.669951 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.690363 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km8sh\" (UniqueName: \"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh\") pod \"dnsmasq-dns-86db49b7ff-bltz2\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:26 crc kubenswrapper[4801]: I1206 03:25:25.827880 4801 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.671238 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.671972 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbh5ch85h599h54h5f6h57bh7dh67ch67bhfh7bh58fh5ddh6fh689h569h5b9h56bh4hb5h5cfh699h55h698h5cbh679hb9h689hd6h545hbfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,Sub
PathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhgb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-controller-qqlb5_openstack(eefe8d7e-f739-42c8-88fb-2c27a8630e8b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.673152 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.738610 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.741876 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.742142 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbh5ch85h599h54h5f6h57bh7dh67ch67bhfh7bh58fh5ddh6fh689h569h5b9h56bh4hb5h5cfh699h55h698h5cbh679hb9h689hd6h545hbfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dc5nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-44f28_openstack(276fe396-a90f-4c5b-83ce-ac17c7617e63): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:39 crc kubenswrapper[4801]: E1206 03:25:39.743911 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-44f28" podUID="276fe396-a90f-4c5b-83ce-ac17c7617e63" Dec 06 03:25:40 crc kubenswrapper[4801]: E1206 03:25:40.083279 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Dec 06 03:25:40 crc kubenswrapper[4801]: E1206 03:25:40.083921 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cbh67fh58dhcdh598h7bh686hbhcbh59h6ch68dh96hb7h9bh575h577h669h56h58dh99hd4h95h6dh648h666h5h5f9h9ch54bh545h59q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvdch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Comm
and:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(f4e4cd15-b8c1-4521-82f7-d54fb0141c9b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:25:40 crc kubenswrapper[4801]: I1206 03:25:40.575902 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:40 crc kubenswrapper[4801]: W1206 03:25:40.608338 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26b350e2_f52b_4ebd_a380_1d7120345323.slice/crio-187ba4fec60487f12db745014fa07dc1130d812b8800835d8f0037810a0bca9f WatchSource:0}: Error finding container 187ba4fec60487f12db745014fa07dc1130d812b8800835d8f0037810a0bca9f: Status 404 returned error can't find the container with id 187ba4fec60487f12db745014fa07dc1130d812b8800835d8f0037810a0bca9f Dec 06 03:25:40 crc kubenswrapper[4801]: I1206 03:25:40.756654 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" event={"ID":"26b350e2-f52b-4ebd-a380-1d7120345323","Type":"ContainerStarted","Data":"187ba4fec60487f12db745014fa07dc1130d812b8800835d8f0037810a0bca9f"} Dec 06 03:25:40 crc kubenswrapper[4801]: E1206 03:25:40.758497 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-44f28" podUID="276fe396-a90f-4c5b-83ce-ac17c7617e63" Dec 06 03:25:40 crc kubenswrapper[4801]: I1206 03:25:40.798280 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gv5hg"] Dec 06 03:25:40 crc kubenswrapper[4801]: I1206 03:25:40.899201 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:25:41 crc kubenswrapper[4801]: W1206 03:25:41.077202 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00cc7364_fab1_449d_9939_020c58f7e9af.slice/crio-609e32983f0ed0c4f4aefa3b3018171ad52a5be9bc6a4e589efc37b112d35d14 WatchSource:0}: Error finding container 609e32983f0ed0c4f4aefa3b3018171ad52a5be9bc6a4e589efc37b112d35d14: Status 404 returned error can't find the container with id 609e32983f0ed0c4f4aefa3b3018171ad52a5be9bc6a4e589efc37b112d35d14 Dec 06 
03:25:41 crc kubenswrapper[4801]: E1206 03:25:41.081570 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 06 03:25:41 crc kubenswrapper[4801]: E1206 03:25:41.081649 4801 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 06 03:25:41 crc kubenswrapper[4801]: E1206 03:25:41.081909 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w4rnl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(f3419971-0654-47d2-befb-5afb0761011c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 03:25:41 crc kubenswrapper[4801]: E1206 03:25:41.083210 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="f3419971-0654-47d2-befb-5afb0761011c" Dec 06 03:25:41 crc kubenswrapper[4801]: W1206 03:25:41.084935 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b0adc6a_9d6a_4226_b807_79f3d905925a.slice/crio-04ab610e0d9a226b1aa8999878c03ad5b7d62ff5ecc8708a4f601fe4cde626af WatchSource:0}: Error finding container 04ab610e0d9a226b1aa8999878c03ad5b7d62ff5ecc8708a4f601fe4cde626af: Status 404 returned error can't find the container with id 
04ab610e0d9a226b1aa8999878c03ad5b7d62ff5ecc8708a4f601fe4cde626af Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.169684 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.169770 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.169825 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.170646 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.170719 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0" gracePeriod=600 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.765624 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"dd4e7515-f487-4c9e-b405-a5f61022d5e5","Type":"ContainerStarted","Data":"77fd833b7efee86b26348fc68ca44f79009b590f12941e089df013c690336c1b"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.768057 4801 generic.go:334] "Generic (PLEG): container finished" podID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" containerID="09fa4f110d7360decd605ea99a54c92bda23ee2b23ab15398187f229e1e0ab64" exitCode=0 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.768160 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" event={"ID":"ee2d2aeb-4219-432e-8164-dfd69500a1cd","Type":"ContainerDied","Data":"09fa4f110d7360decd605ea99a54c92bda23ee2b23ab15398187f229e1e0ab64"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.781830 4801 generic.go:334] "Generic (PLEG): container finished" podID="26b350e2-f52b-4ebd-a380-1d7120345323" containerID="71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11" exitCode=0 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.782312 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" event={"ID":"26b350e2-f52b-4ebd-a380-1d7120345323","Type":"ContainerDied","Data":"71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.791356 4801 generic.go:334] "Generic (PLEG): container finished" podID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerID="a867bcc693f7f30778834124c97a6d22271f07934e704abe63267f90b7d740e2" exitCode=0 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.791718 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" event={"ID":"9b0adc6a-9d6a-4226-b807-79f3d905925a","Type":"ContainerDied","Data":"a867bcc693f7f30778834124c97a6d22271f07934e704abe63267f90b7d740e2"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.791801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" event={"ID":"9b0adc6a-9d6a-4226-b807-79f3d905925a","Type":"ContainerStarted","Data":"04ab610e0d9a226b1aa8999878c03ad5b7d62ff5ecc8708a4f601fe4cde626af"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.795976 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0" exitCode=0 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.796060 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.796089 4801 scope.go:117] "RemoveContainer" containerID="1bc0ec1db27713faa2819e59d2236a16fed1ad4e4c8174b604a5bb2c54258d36" Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.798800 4801 generic.go:334] "Generic (PLEG): container finished" podID="08c51594-17aa-4372-b10e-5dfef9eb5f85" containerID="3ba7dc92899db40a7cc915d2389b78b20d1709b758c8c3390dfd7fa90eefe962" exitCode=0 Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.798883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" event={"ID":"08c51594-17aa-4372-b10e-5dfef9eb5f85","Type":"ContainerDied","Data":"3ba7dc92899db40a7cc915d2389b78b20d1709b758c8c3390dfd7fa90eefe962"} Dec 06 03:25:41 crc kubenswrapper[4801]: I1206 03:25:41.801041 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gv5hg" event={"ID":"00cc7364-fab1-449d-9939-020c58f7e9af","Type":"ContainerStarted","Data":"609e32983f0ed0c4f4aefa3b3018171ad52a5be9bc6a4e589efc37b112d35d14"} Dec 06 03:25:41 crc kubenswrapper[4801]: E1206 03:25:41.802527 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="f3419971-0654-47d2-befb-5afb0761011c" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.164913 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.200652 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.274163 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config\") pod \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.274815 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config\") pod \"08c51594-17aa-4372-b10e-5dfef9eb5f85\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.275048 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc\") pod \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.275186 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnnn\" (UniqueName: \"kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn\") pod \"08c51594-17aa-4372-b10e-5dfef9eb5f85\" (UID: 
\"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.275301 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc\") pod \"08c51594-17aa-4372-b10e-5dfef9eb5f85\" (UID: \"08c51594-17aa-4372-b10e-5dfef9eb5f85\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.275399 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnv5b\" (UniqueName: \"kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b\") pod \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\" (UID: \"ee2d2aeb-4219-432e-8164-dfd69500a1cd\") " Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.281511 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn" (OuterVolumeSpecName: "kube-api-access-bmnnn") pod "08c51594-17aa-4372-b10e-5dfef9eb5f85" (UID: "08c51594-17aa-4372-b10e-5dfef9eb5f85"). InnerVolumeSpecName "kube-api-access-bmnnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.281635 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b" (OuterVolumeSpecName: "kube-api-access-qnv5b") pod "ee2d2aeb-4219-432e-8164-dfd69500a1cd" (UID: "ee2d2aeb-4219-432e-8164-dfd69500a1cd"). InnerVolumeSpecName "kube-api-access-qnv5b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.296500 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "08c51594-17aa-4372-b10e-5dfef9eb5f85" (UID: "08c51594-17aa-4372-b10e-5dfef9eb5f85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.296826 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config" (OuterVolumeSpecName: "config") pod "ee2d2aeb-4219-432e-8164-dfd69500a1cd" (UID: "ee2d2aeb-4219-432e-8164-dfd69500a1cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.299338 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee2d2aeb-4219-432e-8164-dfd69500a1cd" (UID: "ee2d2aeb-4219-432e-8164-dfd69500a1cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.302682 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config" (OuterVolumeSpecName: "config") pod "08c51594-17aa-4372-b10e-5dfef9eb5f85" (UID: "08c51594-17aa-4372-b10e-5dfef9eb5f85"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378323 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnv5b\" (UniqueName: \"kubernetes.io/projected/ee2d2aeb-4219-432e-8164-dfd69500a1cd-kube-api-access-qnv5b\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378371 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378385 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378397 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee2d2aeb-4219-432e-8164-dfd69500a1cd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378410 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnnn\" (UniqueName: \"kubernetes.io/projected/08c51594-17aa-4372-b10e-5dfef9eb5f85-kube-api-access-bmnnn\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.378422 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/08c51594-17aa-4372-b10e-5dfef9eb5f85-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.814056 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" event={"ID":"26b350e2-f52b-4ebd-a380-1d7120345323","Type":"ContainerStarted","Data":"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 
03:25:42.814477 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.817929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" event={"ID":"9b0adc6a-9d6a-4226-b807-79f3d905925a","Type":"ContainerStarted","Data":"6041169c19cf2bf18daf8d421196f1239abaf9ecca544873e57413d049746847"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.817962 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.819405 4801 generic.go:334] "Generic (PLEG): container finished" podID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerID="bfe1e8f8123f93d565059ce5a88a390a5c7ad059e1d2399d35f80e20e99e0a2b" exitCode=0 Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.819479 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c85c66a1-6bad-499d-8a59-75020d456cd7","Type":"ContainerDied","Data":"bfe1e8f8123f93d565059ce5a88a390a5c7ad059e1d2399d35f80e20e99e0a2b"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.822632 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.824227 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" event={"ID":"08c51594-17aa-4372-b10e-5dfef9eb5f85","Type":"ContainerDied","Data":"7b6aa87d63e34ad26e8ed52cec449024ae0425bbb423799f0a02c33801d66e8a"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.824261 4801 scope.go:117] "RemoveContainer" 
containerID="3ba7dc92899db40a7cc915d2389b78b20d1709b758c8c3390dfd7fa90eefe962" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.824267 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-857lm" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.825977 4801 generic.go:334] "Generic (PLEG): container finished" podID="463cb826-89ba-4c9d-b4ae-9453464d3ebc" containerID="1d2ee6f45706dcf2eb6e1b0d19ca99b7e652b109d5002314cd61270db02942ff" exitCode=0 Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.826029 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"463cb826-89ba-4c9d-b4ae-9453464d3ebc","Type":"ContainerDied","Data":"1d2ee6f45706dcf2eb6e1b0d19ca99b7e652b109d5002314cd61270db02942ff"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.827901 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" event={"ID":"ee2d2aeb-4219-432e-8164-dfd69500a1cd","Type":"ContainerDied","Data":"cd760bd6876289db3fcdde1b1a7c18de30506afedc0fd1f54a638d0792d7f815"} Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.827963 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-kjr5c" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.837646 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" podStartSLOduration=17.837621511000002 podStartE2EDuration="17.837621511s" podCreationTimestamp="2025-12-06 03:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:25:42.83271368 +0000 UTC m=+1195.955321262" watchObservedRunningTime="2025-12-06 03:25:42.837621511 +0000 UTC m=+1195.960229083" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.858584 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podStartSLOduration=17.85856126 podStartE2EDuration="17.85856126s" podCreationTimestamp="2025-12-06 03:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:25:42.853968328 +0000 UTC m=+1195.976575900" watchObservedRunningTime="2025-12-06 03:25:42.85856126 +0000 UTC m=+1195.981168832" Dec 06 03:25:42 crc kubenswrapper[4801]: I1206 03:25:42.983997 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:25:43 crc kubenswrapper[4801]: I1206 03:25:43.000463 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-857lm"] Dec 06 03:25:43 crc kubenswrapper[4801]: I1206 03:25:43.017793 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:25:43 crc kubenswrapper[4801]: I1206 03:25:43.026083 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-kjr5c"] Dec 06 03:25:43 crc kubenswrapper[4801]: I1206 03:25:43.226980 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" path="/var/lib/kubelet/pods/08c51594-17aa-4372-b10e-5dfef9eb5f85/volumes" Dec 06 03:25:43 crc kubenswrapper[4801]: I1206 03:25:43.227614 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" path="/var/lib/kubelet/pods/ee2d2aeb-4219-432e-8164-dfd69500a1cd/volumes" Dec 06 03:25:47 crc kubenswrapper[4801]: I1206 03:25:47.562154 4801 scope.go:117] "RemoveContainer" containerID="09fa4f110d7360decd605ea99a54c92bda23ee2b23ab15398187f229e1e0ab64" Dec 06 03:25:47 crc kubenswrapper[4801]: I1206 03:25:47.894053 4801 generic.go:334] "Generic (PLEG): container finished" podID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerID="fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795" exitCode=0 Dec 06 03:25:47 crc kubenswrapper[4801]: I1206 03:25:47.894129 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerDied","Data":"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795"} Dec 06 03:25:48 crc kubenswrapper[4801]: I1206 03:25:48.915014 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"463cb826-89ba-4c9d-b4ae-9453464d3ebc","Type":"ContainerStarted","Data":"6a8dfbee8ad7cb12ac4471859ad32e47005caede459c3dabb84ac87f6822eced"} Dec 06 03:25:48 crc kubenswrapper[4801]: I1206 03:25:48.961374 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=44.333330319 podStartE2EDuration="1m16.961346527s" podCreationTimestamp="2025-12-06 03:24:32 +0000 UTC" firstStartedPulling="2025-12-06 03:24:34.985554178 +0000 UTC m=+1128.108161750" lastFinishedPulling="2025-12-06 03:25:07.613570366 +0000 UTC m=+1160.736177958" observedRunningTime="2025-12-06 03:25:48.949555143 +0000 UTC m=+1202.072162755" watchObservedRunningTime="2025-12-06 
03:25:48.961346527 +0000 UTC m=+1202.083954129" Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.665071 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.829847 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.936661 4801 generic.go:334] "Generic (PLEG): container finished" podID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerID="103e39ea3c7868e8eeec409ec28e22cd8888fb8366b8635d350b4e3f90de6af7" exitCode=0 Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.936721 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerDied","Data":"103e39ea3c7868e8eeec409ec28e22cd8888fb8366b8635d350b4e3f90de6af7"} Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.951857 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:50 crc kubenswrapper[4801]: I1206 03:25:50.952159 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="dnsmasq-dns" containerID="cri-o://bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488" gracePeriod=10 Dec 06 03:25:51 crc kubenswrapper[4801]: E1206 03:25:51.014935 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="f4e4cd15-b8c1-4521-82f7-d54fb0141c9b" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.364211 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.476132 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc\") pod \"26b350e2-f52b-4ebd-a380-1d7120345323\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.476323 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config\") pod \"26b350e2-f52b-4ebd-a380-1d7120345323\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.476381 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4kl\" (UniqueName: \"kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl\") pod \"26b350e2-f52b-4ebd-a380-1d7120345323\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.476408 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb\") pod \"26b350e2-f52b-4ebd-a380-1d7120345323\" (UID: \"26b350e2-f52b-4ebd-a380-1d7120345323\") " Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.480908 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl" (OuterVolumeSpecName: "kube-api-access-xf4kl") pod "26b350e2-f52b-4ebd-a380-1d7120345323" (UID: "26b350e2-f52b-4ebd-a380-1d7120345323"). InnerVolumeSpecName "kube-api-access-xf4kl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.517601 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "26b350e2-f52b-4ebd-a380-1d7120345323" (UID: "26b350e2-f52b-4ebd-a380-1d7120345323"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.527592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config" (OuterVolumeSpecName: "config") pod "26b350e2-f52b-4ebd-a380-1d7120345323" (UID: "26b350e2-f52b-4ebd-a380-1d7120345323"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.529884 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "26b350e2-f52b-4ebd-a380-1d7120345323" (UID: "26b350e2-f52b-4ebd-a380-1d7120345323"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.578665 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.578710 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf4kl\" (UniqueName: \"kubernetes.io/projected/26b350e2-f52b-4ebd-a380-1d7120345323-kube-api-access-xf4kl\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.578727 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.578742 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26b350e2-f52b-4ebd-a380-1d7120345323-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.946833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerStarted","Data":"d961461802ea59de4613b751544f36808463a49652856db4d6b72d50398ef750"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.948160 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.950987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"dd4e7515-f487-4c9e-b405-a5f61022d5e5","Type":"ContainerStarted","Data":"68212c284ed42a88cf9117f0f998d2585cb6dc131e9b61ea170e1eda0204b960"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.953587 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b","Type":"ContainerStarted","Data":"d35605eeb7efcd9a895c2ed5e85fdf016b7783d2cd34a699b0e6af50def04510"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.956272 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerStarted","Data":"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.957112 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.959990 4801 generic.go:334] "Generic (PLEG): container finished" podID="26b350e2-f52b-4ebd-a380-1d7120345323" containerID="bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488" exitCode=0 Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.960060 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.960063 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" event={"ID":"26b350e2-f52b-4ebd-a380-1d7120345323","Type":"ContainerDied","Data":"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.960174 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-6r2z6" event={"ID":"26b350e2-f52b-4ebd-a380-1d7120345323","Type":"ContainerDied","Data":"187ba4fec60487f12db745014fa07dc1130d812b8800835d8f0037810a0bca9f"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.960195 4801 scope.go:117] "RemoveContainer" containerID="bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488" Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.962006 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c85c66a1-6bad-499d-8a59-75020d456cd7","Type":"ContainerStarted","Data":"82f7ecf4641594d77e1ada3a68271a4872dbf9503ee6bbcf2ef1fc5b07a6808b"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.965641 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gv5hg" event={"ID":"00cc7364-fab1-449d-9939-020c58f7e9af","Type":"ContainerStarted","Data":"7308b298894a5de2af5c5e28057794a84327b25f230ca0f1053f61e3f3ef035c"} Dec 06 03:25:51 crc kubenswrapper[4801]: I1206 03:25:51.989707 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371955.865086 podStartE2EDuration="1m20.989689945s" podCreationTimestamp="2025-12-06 03:24:31 +0000 UTC" firstStartedPulling="2025-12-06 03:24:33.623974711 +0000 UTC m=+1126.746582273" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:25:51.983743236 +0000 UTC 
m=+1205.106350828" watchObservedRunningTime="2025-12-06 03:25:51.989689945 +0000 UTC m=+1205.112297517" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.021822 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=46.925320241 podStartE2EDuration="1m18.021803833s" podCreationTimestamp="2025-12-06 03:24:34 +0000 UTC" firstStartedPulling="2025-12-06 03:24:36.491516511 +0000 UTC m=+1129.614124083" lastFinishedPulling="2025-12-06 03:25:07.588000103 +0000 UTC m=+1160.710607675" observedRunningTime="2025-12-06 03:25:52.015465403 +0000 UTC m=+1205.138072965" watchObservedRunningTime="2025-12-06 03:25:52.021803833 +0000 UTC m=+1205.144411425" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.030448 4801 scope.go:117] "RemoveContainer" containerID="71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.043419 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gv5hg" podStartSLOduration=20.401950838 podStartE2EDuration="27.04340249s" podCreationTimestamp="2025-12-06 03:25:25 +0000 UTC" firstStartedPulling="2025-12-06 03:25:41.080413132 +0000 UTC m=+1194.203020704" lastFinishedPulling="2025-12-06 03:25:47.721864784 +0000 UTC m=+1200.844472356" observedRunningTime="2025-12-06 03:25:52.040222095 +0000 UTC m=+1205.162829667" watchObservedRunningTime="2025-12-06 03:25:52.04340249 +0000 UTC m=+1205.166010052" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.059706 4801 scope.go:117] "RemoveContainer" containerID="bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488" Dec 06 03:25:52 crc kubenswrapper[4801]: E1206 03:25:52.064887 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488\": container with ID starting 
with bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488 not found: ID does not exist" containerID="bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.064933 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488"} err="failed to get container status \"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488\": rpc error: code = NotFound desc = could not find container \"bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488\": container with ID starting with bd9999f4c32eb4f80facbab42bf126980f8c080ad3ddb2375712734fc4a25488 not found: ID does not exist" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.064959 4801 scope.go:117] "RemoveContainer" containerID="71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11" Dec 06 03:25:52 crc kubenswrapper[4801]: E1206 03:25:52.065393 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11\": container with ID starting with 71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11 not found: ID does not exist" containerID="71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.065417 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11"} err="failed to get container status \"71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11\": rpc error: code = NotFound desc = could not find container \"71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11\": container with ID starting with 71e591e131e3fcd9bece4a7b2fa1a3403abbb9384ce1dec7fa23bca021810f11 not found: ID does 
not exist" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.104738 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=46.829598023 podStartE2EDuration="1m21.104714418s" podCreationTimestamp="2025-12-06 03:24:31 +0000 UTC" firstStartedPulling="2025-12-06 03:24:33.312026215 +0000 UTC m=+1126.434633787" lastFinishedPulling="2025-12-06 03:25:07.58714261 +0000 UTC m=+1160.709750182" observedRunningTime="2025-12-06 03:25:52.097158657 +0000 UTC m=+1205.219766239" watchObservedRunningTime="2025-12-06 03:25:52.104714418 +0000 UTC m=+1205.227322010" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.127044 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.254224633 podStartE2EDuration="1m12.127025295s" podCreationTimestamp="2025-12-06 03:24:40 +0000 UTC" firstStartedPulling="2025-12-06 03:25:08.851536569 +0000 UTC m=+1161.974144141" lastFinishedPulling="2025-12-06 03:25:47.724337231 +0000 UTC m=+1200.846944803" observedRunningTime="2025-12-06 03:25:52.121897817 +0000 UTC m=+1205.244505409" watchObservedRunningTime="2025-12-06 03:25:52.127025295 +0000 UTC m=+1205.249632867" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.141347 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.155845 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-6r2z6"] Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.974883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5" event={"ID":"eefe8d7e-f739-42c8-88fb-2c27a8630e8b","Type":"ContainerStarted","Data":"e4b04c92d25a7176182af6c2c20ee414cc6536c05238685e465bc9ea1519c688"} Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.975438 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/ovn-controller-qqlb5" Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.976804 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f4e4cd15-b8c1-4521-82f7-d54fb0141c9b","Type":"ContainerStarted","Data":"61351ac1771dc317e1d1d93d16a089aafdd27c3b33905fe6185398815d1239f0"} Dec 06 03:25:52 crc kubenswrapper[4801]: I1206 03:25:52.996420 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qqlb5" podStartSLOduration=30.211270235 podStartE2EDuration="1m12.996399117s" podCreationTimestamp="2025-12-06 03:24:40 +0000 UTC" firstStartedPulling="2025-12-06 03:25:08.848364254 +0000 UTC m=+1161.970971826" lastFinishedPulling="2025-12-06 03:25:51.633493136 +0000 UTC m=+1204.756100708" observedRunningTime="2025-12-06 03:25:52.994423435 +0000 UTC m=+1206.117031017" watchObservedRunningTime="2025-12-06 03:25:52.996399117 +0000 UTC m=+1206.119006689" Dec 06 03:25:53 crc kubenswrapper[4801]: I1206 03:25:53.013460 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=26.507291792 podStartE2EDuration="1m10.013441963s" podCreationTimestamp="2025-12-06 03:24:43 +0000 UTC" firstStartedPulling="2025-12-06 03:25:08.848372584 +0000 UTC m=+1161.970980156" lastFinishedPulling="2025-12-06 03:25:52.354522755 +0000 UTC m=+1205.477130327" observedRunningTime="2025-12-06 03:25:53.011743927 +0000 UTC m=+1206.134351509" watchObservedRunningTime="2025-12-06 03:25:53.013441963 +0000 UTC m=+1206.136049535" Dec 06 03:25:53 crc kubenswrapper[4801]: I1206 03:25:53.223850 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" path="/var/lib/kubelet/pods/26b350e2-f52b-4ebd-a380-1d7120345323/volumes" Dec 06 03:25:53 crc kubenswrapper[4801]: I1206 03:25:53.792962 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-nb-0" Dec 06 03:25:53 crc kubenswrapper[4801]: I1206 03:25:53.837441 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 06 03:25:53 crc kubenswrapper[4801]: I1206 03:25:53.986415 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 06 03:25:54 crc kubenswrapper[4801]: I1206 03:25:54.028490 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 06 03:25:54 crc kubenswrapper[4801]: I1206 03:25:54.272152 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 06 03:25:54 crc kubenswrapper[4801]: I1206 03:25:54.273206 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 06 03:25:54 crc kubenswrapper[4801]: I1206 03:25:54.273237 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 06 03:25:54 crc kubenswrapper[4801]: I1206 03:25:54.344584 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 06 03:25:55 crc kubenswrapper[4801]: I1206 03:25:55.056130 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 06 03:25:55 crc kubenswrapper[4801]: I1206 03:25:55.272416 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 06 03:25:55 crc kubenswrapper[4801]: I1206 03:25:55.900852 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 06 03:25:55 crc kubenswrapper[4801]: I1206 03:25:55.901057 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.325739 4801 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.379052 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549027 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 06 03:25:57 crc kubenswrapper[4801]: E1206 03:25:57.549318 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549330 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: E1206 03:25:57.549339 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549345 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: E1206 03:25:57.549357 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549365 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: E1206 03:25:57.549388 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="dnsmasq-dns" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549396 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="dnsmasq-dns" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549569 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="08c51594-17aa-4372-b10e-5dfef9eb5f85" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549589 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b350e2-f52b-4ebd-a380-1d7120345323" containerName="dnsmasq-dns" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.549600 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee2d2aeb-4219-432e-8164-dfd69500a1cd" containerName="init" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.550357 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.552341 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.552609 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dtdpn" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.553082 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.553215 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.564929 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.686797 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687183 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-config\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687304 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687338 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-scripts\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687425 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq8bx\" (UniqueName: \"kubernetes.io/projected/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-kube-api-access-hq8bx\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.687520 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.788503 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.788781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.788887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.788970 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-config\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.789102 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc 
kubenswrapper[4801]: I1206 03:25:57.789196 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-scripts\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.789251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.789396 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq8bx\" (UniqueName: \"kubernetes.io/projected/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-kube-api-access-hq8bx\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.791793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-scripts\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.791873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-config\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.800484 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.800708 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.800731 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.840567 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq8bx\" (UniqueName: \"kubernetes.io/projected/62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b-kube-api-access-hq8bx\") pod \"ovn-northd-0\" (UID: \"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b\") " pod="openstack/ovn-northd-0" Dec 06 03:25:57 crc kubenswrapper[4801]: I1206 03:25:57.868819 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 03:25:58 crc kubenswrapper[4801]: I1206 03:25:58.026824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3419971-0654-47d2-befb-5afb0761011c","Type":"ContainerStarted","Data":"dcfe6030a41d831e174e17880ed423d88393f57a70a8f6e8f34e6a2cbe5ff58b"} Dec 06 03:25:58 crc kubenswrapper[4801]: I1206 03:25:58.027907 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 06 03:25:58 crc kubenswrapper[4801]: I1206 03:25:58.032981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-44f28" event={"ID":"276fe396-a90f-4c5b-83ce-ac17c7617e63","Type":"ContainerStarted","Data":"3d87c30d72eb0e4c1d671eae3cd5bd2408c35767c7c1578891562556701b0d66"} Dec 06 03:25:58 crc kubenswrapper[4801]: I1206 03:25:58.054450 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=23.348605893 podStartE2EDuration="1m21.054426225s" podCreationTimestamp="2025-12-06 03:24:37 +0000 UTC" firstStartedPulling="2025-12-06 03:24:59.665786616 +0000 UTC m=+1152.788394188" lastFinishedPulling="2025-12-06 03:25:57.371606948 +0000 UTC m=+1210.494214520" observedRunningTime="2025-12-06 03:25:58.044280583 +0000 UTC m=+1211.166888155" watchObservedRunningTime="2025-12-06 03:25:58.054426225 +0000 UTC m=+1211.177033797" Dec 06 03:25:58 crc kubenswrapper[4801]: I1206 03:25:58.379344 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 03:25:58 crc kubenswrapper[4801]: W1206 03:25:58.389883 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f4ed72_b541_4bcb_9f27_ec5b7ac71a9b.slice/crio-5cb01286205a5dd08d13922ce971ade8354b5b7e84b11937999f9e36ac60ff6c WatchSource:0}: Error finding container 
5cb01286205a5dd08d13922ce971ade8354b5b7e84b11937999f9e36ac60ff6c: Status 404 returned error can't find the container with id 5cb01286205a5dd08d13922ce971ade8354b5b7e84b11937999f9e36ac60ff6c Dec 06 03:25:59 crc kubenswrapper[4801]: I1206 03:25:59.042068 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b","Type":"ContainerStarted","Data":"5cb01286205a5dd08d13922ce971ade8354b5b7e84b11937999f9e36ac60ff6c"} Dec 06 03:25:59 crc kubenswrapper[4801]: I1206 03:25:59.045114 4801 generic.go:334] "Generic (PLEG): container finished" podID="276fe396-a90f-4c5b-83ce-ac17c7617e63" containerID="3d87c30d72eb0e4c1d671eae3cd5bd2408c35767c7c1578891562556701b0d66" exitCode=0 Dec 06 03:25:59 crc kubenswrapper[4801]: I1206 03:25:59.045285 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-44f28" event={"ID":"276fe396-a90f-4c5b-83ce-ac17c7617e63","Type":"ContainerDied","Data":"3d87c30d72eb0e4c1d671eae3cd5bd2408c35767c7c1578891562556701b0d66"} Dec 06 03:26:02 crc kubenswrapper[4801]: I1206 03:26:02.698188 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Dec 06 03:26:03 crc kubenswrapper[4801]: I1206 03:26:03.064300 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 06 03:26:03 crc kubenswrapper[4801]: I1206 03:26:03.090166 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-44f28" event={"ID":"276fe396-a90f-4c5b-83ce-ac17c7617e63","Type":"ContainerStarted","Data":"3405fa70acc41dbf711fb4eabb2dee2c8c362f7034f6fff89d90a280946cba5d"} Dec 06 
03:26:03 crc kubenswrapper[4801]: I1206 03:26:03.621575 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Dec 06 03:26:03 crc kubenswrapper[4801]: I1206 03:26:03.704873 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output=<
Dec 06 03:26:03 crc kubenswrapper[4801]: wsrep_local_state_comment (Joined) differs from Synced
Dec 06 03:26:03 crc kubenswrapper[4801]: >
Dec 06 03:26:04 crc kubenswrapper[4801]: I1206 03:26:04.105252 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-44f28" event={"ID":"276fe396-a90f-4c5b-83ce-ac17c7617e63","Type":"ContainerStarted","Data":"8ceee5e8f6bae8e5e3ad68df3908ef5ace02ddecf3269bf2ff6d73fcb1378917"}
Dec 06 03:26:04 crc kubenswrapper[4801]: I1206 03:26:04.105634 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-44f28"
Dec 06 03:26:04 crc kubenswrapper[4801]: I1206 03:26:04.105744 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-44f28"
Dec 06 03:26:04 crc kubenswrapper[4801]: I1206 03:26:04.142395 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-44f28" podStartSLOduration=40.216894753 podStartE2EDuration="1m23.142359485s" podCreationTimestamp="2025-12-06 03:24:41 +0000 UTC" firstStartedPulling="2025-12-06 03:25:14.041774359 +0000 UTC m=+1167.164381961" lastFinishedPulling="2025-12-06 03:25:56.967239131 +0000 UTC m=+1210.089846693" observedRunningTime="2025-12-06 03:26:04.13544562 +0000 UTC m=+1217.258053202" watchObservedRunningTime="2025-12-06 03:26:04.142359485 +0000 UTC m=+1217.264967097"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.558613 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-89sv7"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.560976 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.567430 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-089f-account-create-update-zx4mt"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.568642 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.571249 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.576505 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-89sv7"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.613109 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-089f-account-create-update-zx4mt"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.668909 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9m2r\" (UniqueName: \"kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.669000 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.670216 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.670397 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rm52\" (UniqueName: \"kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.771605 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.771657 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rm52\" (UniqueName: \"kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.771711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9m2r\" (UniqueName: \"kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.771730 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.772490 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.772519 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.807135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rm52\" (UniqueName: \"kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52\") pod \"keystone-db-create-89sv7\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.809438 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9m2r\" (UniqueName: \"kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r\") pod \"keystone-089f-account-create-update-zx4mt\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.850938 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-ppn95"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.852986 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.863107 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ppn95"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.916453 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.923273 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.944543 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f1a-account-create-update-tgmnk"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.945681 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.949162 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.949774 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f1a-account-create-update-tgmnk"]
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.975518 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77czx\" (UniqueName: \"kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.975573 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:05 crc kubenswrapper[4801]: I1206 03:26:05.994855 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.076869 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77czx\" (UniqueName: \"kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.077630 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.077664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.077845 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2qgq\" (UniqueName: \"kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.079129 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.103910 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77czx\" (UniqueName: \"kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx\") pod \"placement-db-create-ppn95\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") " pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.179589 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.179673 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2qgq\" (UniqueName: \"kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.180516 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.186721 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.197276 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2qgq\" (UniqueName: \"kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq\") pod \"placement-6f1a-account-create-update-tgmnk\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") " pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.267551 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.450386 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-27wtd"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.451855 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.462213 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-27wtd"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.585985 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-89sv7"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.589282 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6kc\" (UniqueName: \"kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.589943 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.594920 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-147f-account-create-update-dg2m9"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.595878 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.603876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 06 03:26:06 crc kubenswrapper[4801]: W1206 03:26:06.604979 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77f56097_f68c_48fe_b455_33f6119871c5.slice/crio-780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad WatchSource:0}: Error finding container 780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad: Status 404 returned error can't find the container with id 780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.605683 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-147f-account-create-update-dg2m9"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.632783 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-089f-account-create-update-zx4mt"]
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.694292 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.694526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.694623 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw6kc\" (UniqueName: \"kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.694717 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrfn\" (UniqueName: \"kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.695530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.734832 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw6kc\" (UniqueName: \"kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc\") pod \"glance-db-create-27wtd\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") " pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.771477 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.796059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hrfn\" (UniqueName: \"kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.796118 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.799515 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.814998 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hrfn\" (UniqueName: \"kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn\") pod \"glance-147f-account-create-update-dg2m9\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:06 crc kubenswrapper[4801]: I1206 03:26:06.933967 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.015804 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-ppn95"]
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.145575 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-089f-account-create-update-zx4mt" event={"ID":"d7c4a0a8-727e-4e11-891e-c635069d7a91","Type":"ContainerStarted","Data":"92cb5dafcc6835d1627e7a054150d2f7e6baaf776e7bba5a88df4f0b1bf6241e"}
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.148371 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89sv7" event={"ID":"77f56097-f68c-48fe-b455-33f6119871c5","Type":"ContainerStarted","Data":"780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad"}
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.149841 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ppn95" event={"ID":"587add7b-0716-4ce8-b807-4fb5415f85aa","Type":"ContainerStarted","Data":"7bd0a9f8ecafd18ea7ac817d052b57ff20dca4b8dcb10febe1eb2dabbbeb0f1a"}
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.169498 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f1a-account-create-update-tgmnk"]
Dec 06 03:26:07 crc kubenswrapper[4801]: W1206 03:26:07.180376 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf618d289_2120_4097_80bd_0cff83800ff8.slice/crio-98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af WatchSource:0}: Error finding container 98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af: Status 404 returned error can't find the container with id 98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.327409 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-27wtd"]
Dec 06 03:26:07 crc kubenswrapper[4801]: W1206 03:26:07.333308 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb605f602_124c_4b96_854a_ff8e9583ec6b.slice/crio-8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7 WatchSource:0}: Error finding container 8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7: Status 404 returned error can't find the container with id 8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.436577 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-147f-account-create-update-dg2m9"]
Dec 06 03:26:07 crc kubenswrapper[4801]: W1206 03:26:07.443575 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21b1d8fa_2a96_47ac_aaef_bf2a09f00373.slice/crio-e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d WatchSource:0}: Error finding container e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d: Status 404 returned error can't find the container with id e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d
Dec 06 03:26:07 crc kubenswrapper[4801]: I1206 03:26:07.507895 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.159374 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b","Type":"ContainerStarted","Data":"8c163caceb96017e0c23568c49bb4f55cb26bd8d930a5ee84480148c69d05987"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.160603 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f1a-account-create-update-tgmnk" event={"ID":"f618d289-2120-4097-80bd-0cff83800ff8","Type":"ContainerStarted","Data":"16393c50fdfe02859d7012c4aed6a48eff1d9317d41c8e0273642e5abd8b1b0d"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.160675 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f1a-account-create-update-tgmnk" event={"ID":"f618d289-2120-4097-80bd-0cff83800ff8","Type":"ContainerStarted","Data":"98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.161922 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ppn95" event={"ID":"587add7b-0716-4ce8-b807-4fb5415f85aa","Type":"ContainerStarted","Data":"c31fb23b0fd48026123060beeaf8ab73e82f006b0848957a5aa591b71dc2ac5c"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.163078 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-089f-account-create-update-zx4mt" event={"ID":"d7c4a0a8-727e-4e11-891e-c635069d7a91","Type":"ContainerStarted","Data":"a0ad87272fb18a313441f48899db111047b5274e47b4de376583c4747e33243a"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.164218 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89sv7" event={"ID":"77f56097-f68c-48fe-b455-33f6119871c5","Type":"ContainerStarted","Data":"ceedeaecf815a5549737afe75eb0009ff51352b60875da6468e5019752de45ac"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.165307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-147f-account-create-update-dg2m9" event={"ID":"21b1d8fa-2a96-47ac-aaef-bf2a09f00373","Type":"ContainerStarted","Data":"cd8814ad447eb8c1a7a022a4ee26232df52ebd76e1104b9ed8fd8c23e409a730"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.165342 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-147f-account-create-update-dg2m9" event={"ID":"21b1d8fa-2a96-47ac-aaef-bf2a09f00373","Type":"ContainerStarted","Data":"e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.166485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27wtd" event={"ID":"b605f602-124c-4b96-854a-ff8e9583ec6b","Type":"ContainerStarted","Data":"cff433d1c1ea532d9d1e130aa5032185a6ffd75b29985a1ce97bef06140db53e"}
Dec 06 03:26:08 crc kubenswrapper[4801]: I1206 03:26:08.166539 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27wtd" event={"ID":"b605f602-124c-4b96-854a-ff8e9583ec6b","Type":"ContainerStarted","Data":"8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.187658 4801 generic.go:334] "Generic (PLEG): container finished" podID="77f56097-f68c-48fe-b455-33f6119871c5" containerID="ceedeaecf815a5549737afe75eb0009ff51352b60875da6468e5019752de45ac" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.188188 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89sv7" event={"ID":"77f56097-f68c-48fe-b455-33f6119871c5","Type":"ContainerDied","Data":"ceedeaecf815a5549737afe75eb0009ff51352b60875da6468e5019752de45ac"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.190381 4801 generic.go:334] "Generic (PLEG): container finished" podID="21b1d8fa-2a96-47ac-aaef-bf2a09f00373" containerID="cd8814ad447eb8c1a7a022a4ee26232df52ebd76e1104b9ed8fd8c23e409a730" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.190593 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-147f-account-create-update-dg2m9" event={"ID":"21b1d8fa-2a96-47ac-aaef-bf2a09f00373","Type":"ContainerDied","Data":"cd8814ad447eb8c1a7a022a4ee26232df52ebd76e1104b9ed8fd8c23e409a730"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.193345 4801 generic.go:334] "Generic (PLEG): container finished" podID="b605f602-124c-4b96-854a-ff8e9583ec6b" containerID="cff433d1c1ea532d9d1e130aa5032185a6ffd75b29985a1ce97bef06140db53e" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.193607 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27wtd" event={"ID":"b605f602-124c-4b96-854a-ff8e9583ec6b","Type":"ContainerDied","Data":"cff433d1c1ea532d9d1e130aa5032185a6ffd75b29985a1ce97bef06140db53e"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.199253 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b","Type":"ContainerStarted","Data":"0adcf47e0b718513f65c037cb97d6722d43b774e4bbca72c114c13009cc94f46"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.199744 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.201625 4801 generic.go:334] "Generic (PLEG): container finished" podID="f618d289-2120-4097-80bd-0cff83800ff8" containerID="16393c50fdfe02859d7012c4aed6a48eff1d9317d41c8e0273642e5abd8b1b0d" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.201689 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f1a-account-create-update-tgmnk" event={"ID":"f618d289-2120-4097-80bd-0cff83800ff8","Type":"ContainerDied","Data":"16393c50fdfe02859d7012c4aed6a48eff1d9317d41c8e0273642e5abd8b1b0d"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.202919 4801 generic.go:334] "Generic (PLEG): container finished" podID="587add7b-0716-4ce8-b807-4fb5415f85aa" containerID="c31fb23b0fd48026123060beeaf8ab73e82f006b0848957a5aa591b71dc2ac5c" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.202981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ppn95" event={"ID":"587add7b-0716-4ce8-b807-4fb5415f85aa","Type":"ContainerDied","Data":"c31fb23b0fd48026123060beeaf8ab73e82f006b0848957a5aa591b71dc2ac5c"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.204033 4801 generic.go:334] "Generic (PLEG): container finished" podID="d7c4a0a8-727e-4e11-891e-c635069d7a91" containerID="a0ad87272fb18a313441f48899db111047b5274e47b4de376583c4747e33243a" exitCode=0
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.204089 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-089f-account-create-update-zx4mt" event={"ID":"d7c4a0a8-727e-4e11-891e-c635069d7a91","Type":"ContainerDied","Data":"a0ad87272fb18a313441f48899db111047b5274e47b4de376583c4747e33243a"}
Dec 06 03:26:10 crc kubenswrapper[4801]: I1206 03:26:10.233435 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.230224765 podStartE2EDuration="13.233409747s" podCreationTimestamp="2025-12-06 03:25:57 +0000 UTC" firstStartedPulling="2025-12-06 03:25:58.392663473 +0000 UTC m=+1211.515271045" lastFinishedPulling="2025-12-06 03:26:06.395848465 +0000 UTC m=+1219.518456027" observedRunningTime="2025-12-06 03:26:10.226889813 +0000 UTC m=+1223.349497385" watchObservedRunningTime="2025-12-06 03:26:10.233409747 +0000 UTC m=+1223.356017319"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.644401 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f1a-account-create-update-tgmnk"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.793363 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts\") pod \"f618d289-2120-4097-80bd-0cff83800ff8\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.793872 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2qgq\" (UniqueName: \"kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq\") pod \"f618d289-2120-4097-80bd-0cff83800ff8\" (UID: \"f618d289-2120-4097-80bd-0cff83800ff8\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.794466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f618d289-2120-4097-80bd-0cff83800ff8" (UID: "f618d289-2120-4097-80bd-0cff83800ff8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.799524 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq" (OuterVolumeSpecName: "kube-api-access-z2qgq") pod "f618d289-2120-4097-80bd-0cff83800ff8" (UID: "f618d289-2120-4097-80bd-0cff83800ff8"). InnerVolumeSpecName "kube-api-access-z2qgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.838671 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27wtd"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.848422 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-ppn95"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.863941 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89sv7"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.874505 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-147f-account-create-update-dg2m9"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.883610 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-089f-account-create-update-zx4mt"
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.898189 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f618d289-2120-4097-80bd-0cff83800ff8-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.898231 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2qgq\" (UniqueName: \"kubernetes.io/projected/f618d289-2120-4097-80bd-0cff83800ff8-kube-api-access-z2qgq\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.998975 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77czx\" (UniqueName: \"kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx\") pod \"587add7b-0716-4ce8-b807-4fb5415f85aa\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999082 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw6kc\" (UniqueName: \"kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc\") pod \"b605f602-124c-4b96-854a-ff8e9583ec6b\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999127 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9m2r\" (UniqueName: \"kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r\") pod \"d7c4a0a8-727e-4e11-891e-c635069d7a91\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999168 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts\") pod \"77f56097-f68c-48fe-b455-33f6119871c5\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999205 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hrfn\" (UniqueName: \"kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn\") pod \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999253 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts\") pod \"587add7b-0716-4ce8-b807-4fb5415f85aa\" (UID: \"587add7b-0716-4ce8-b807-4fb5415f85aa\") "
Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999307 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts\") pod \"b605f602-124c-4b96-854a-ff8e9583ec6b\" (UID: \"b605f602-124c-4b96-854a-ff8e9583ec6b\") "
Dec 06
03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999359 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rm52\" (UniqueName: \"kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52\") pod \"77f56097-f68c-48fe-b455-33f6119871c5\" (UID: \"77f56097-f68c-48fe-b455-33f6119871c5\") " Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts\") pod \"d7c4a0a8-727e-4e11-891e-c635069d7a91\" (UID: \"d7c4a0a8-727e-4e11-891e-c635069d7a91\") " Dec 06 03:26:11 crc kubenswrapper[4801]: I1206 03:26:11.999457 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts\") pod \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\" (UID: \"21b1d8fa-2a96-47ac-aaef-bf2a09f00373\") " Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.000605 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7c4a0a8-727e-4e11-891e-c635069d7a91" (UID: "d7c4a0a8-727e-4e11-891e-c635069d7a91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.000643 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "587add7b-0716-4ce8-b807-4fb5415f85aa" (UID: "587add7b-0716-4ce8-b807-4fb5415f85aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.000794 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77f56097-f68c-48fe-b455-33f6119871c5" (UID: "77f56097-f68c-48fe-b455-33f6119871c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.000848 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21b1d8fa-2a96-47ac-aaef-bf2a09f00373" (UID: "21b1d8fa-2a96-47ac-aaef-bf2a09f00373"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.001705 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b605f602-124c-4b96-854a-ff8e9583ec6b" (UID: "b605f602-124c-4b96-854a-ff8e9583ec6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.003589 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn" (OuterVolumeSpecName: "kube-api-access-4hrfn") pod "21b1d8fa-2a96-47ac-aaef-bf2a09f00373" (UID: "21b1d8fa-2a96-47ac-aaef-bf2a09f00373"). InnerVolumeSpecName "kube-api-access-4hrfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.003679 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52" (OuterVolumeSpecName: "kube-api-access-2rm52") pod "77f56097-f68c-48fe-b455-33f6119871c5" (UID: "77f56097-f68c-48fe-b455-33f6119871c5"). InnerVolumeSpecName "kube-api-access-2rm52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.003999 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc" (OuterVolumeSpecName: "kube-api-access-tw6kc") pod "b605f602-124c-4b96-854a-ff8e9583ec6b" (UID: "b605f602-124c-4b96-854a-ff8e9583ec6b"). InnerVolumeSpecName "kube-api-access-tw6kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.005677 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx" (OuterVolumeSpecName: "kube-api-access-77czx") pod "587add7b-0716-4ce8-b807-4fb5415f85aa" (UID: "587add7b-0716-4ce8-b807-4fb5415f85aa"). InnerVolumeSpecName "kube-api-access-77czx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.006287 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r" (OuterVolumeSpecName: "kube-api-access-d9m2r") pod "d7c4a0a8-727e-4e11-891e-c635069d7a91" (UID: "d7c4a0a8-727e-4e11-891e-c635069d7a91"). InnerVolumeSpecName "kube-api-access-d9m2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101514 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77czx\" (UniqueName: \"kubernetes.io/projected/587add7b-0716-4ce8-b807-4fb5415f85aa-kube-api-access-77czx\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101557 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw6kc\" (UniqueName: \"kubernetes.io/projected/b605f602-124c-4b96-854a-ff8e9583ec6b-kube-api-access-tw6kc\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101566 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9m2r\" (UniqueName: \"kubernetes.io/projected/d7c4a0a8-727e-4e11-891e-c635069d7a91-kube-api-access-d9m2r\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101576 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77f56097-f68c-48fe-b455-33f6119871c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101585 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hrfn\" (UniqueName: \"kubernetes.io/projected/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-kube-api-access-4hrfn\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101593 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/587add7b-0716-4ce8-b807-4fb5415f85aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101607 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b605f602-124c-4b96-854a-ff8e9583ec6b-operator-scripts\") on node \"crc\" DevicePath \"\"" 
Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101617 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rm52\" (UniqueName: \"kubernetes.io/projected/77f56097-f68c-48fe-b455-33f6119871c5-kube-api-access-2rm52\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101625 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7c4a0a8-727e-4e11-891e-c635069d7a91-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.101633 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21b1d8fa-2a96-47ac-aaef-bf2a09f00373-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.223819 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-147f-account-create-update-dg2m9" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.223927 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-147f-account-create-update-dg2m9" event={"ID":"21b1d8fa-2a96-47ac-aaef-bf2a09f00373","Type":"ContainerDied","Data":"e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.224023 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e282738a53e480bca94151aaade5ca102b077895fade9d051a45927bb2b6bf2d" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.227675 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-27wtd" event={"ID":"b605f602-124c-4b96-854a-ff8e9583ec6b","Type":"ContainerDied","Data":"8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.227720 4801 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="8c648413fbc9ed875c864742fdb347b3de1761298b833d1dd378d5a815a4bbc7" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.227792 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-27wtd" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.236032 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f1a-account-create-update-tgmnk" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.236079 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f1a-account-create-update-tgmnk" event={"ID":"f618d289-2120-4097-80bd-0cff83800ff8","Type":"ContainerDied","Data":"98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.236115 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d153177831d0d89b72adc302fdf8f9c5dbb139d2578fc0e6690d0bea06f7af" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.238205 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-ppn95" event={"ID":"587add7b-0716-4ce8-b807-4fb5415f85aa","Type":"ContainerDied","Data":"7bd0a9f8ecafd18ea7ac817d052b57ff20dca4b8dcb10febe1eb2dabbbeb0f1a"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.238237 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bd0a9f8ecafd18ea7ac817d052b57ff20dca4b8dcb10febe1eb2dabbbeb0f1a" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.238305 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-ppn95" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.239728 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-089f-account-create-update-zx4mt" event={"ID":"d7c4a0a8-727e-4e11-891e-c635069d7a91","Type":"ContainerDied","Data":"92cb5dafcc6835d1627e7a054150d2f7e6baaf776e7bba5a88df4f0b1bf6241e"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.239830 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92cb5dafcc6835d1627e7a054150d2f7e6baaf776e7bba5a88df4f0b1bf6241e" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.239887 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-089f-account-create-update-zx4mt" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.242372 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-89sv7" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.242368 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-89sv7" event={"ID":"77f56097-f68c-48fe-b455-33f6119871c5","Type":"ContainerDied","Data":"780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad"} Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.242423 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780f72aea0e29faea6fd637f9ac230a6c8a395bb7901d8c3718f40d74bd1e0ad" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.696831 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.998982 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-m6dhf"] Dec 06 03:26:12 crc kubenswrapper[4801]: E1206 03:26:12.999297 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d7c4a0a8-727e-4e11-891e-c635069d7a91" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999316 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c4a0a8-727e-4e11-891e-c635069d7a91" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: E1206 03:26:12.999330 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="587add7b-0716-4ce8-b807-4fb5415f85aa" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999337 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="587add7b-0716-4ce8-b807-4fb5415f85aa" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: E1206 03:26:12.999346 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f56097-f68c-48fe-b455-33f6119871c5" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999352 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f56097-f68c-48fe-b455-33f6119871c5" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: E1206 03:26:12.999361 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b605f602-124c-4b96-854a-ff8e9583ec6b" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999367 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b605f602-124c-4b96-854a-ff8e9583ec6b" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: E1206 03:26:12.999380 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b1d8fa-2a96-47ac-aaef-bf2a09f00373" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999385 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b1d8fa-2a96-47ac-aaef-bf2a09f00373" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc 
kubenswrapper[4801]: E1206 03:26:12.999402 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f618d289-2120-4097-80bd-0cff83800ff8" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999408 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f618d289-2120-4097-80bd-0cff83800ff8" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999548 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="587add7b-0716-4ce8-b807-4fb5415f85aa" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999561 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b1d8fa-2a96-47ac-aaef-bf2a09f00373" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999570 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b605f602-124c-4b96-854a-ff8e9583ec6b" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999584 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f56097-f68c-48fe-b455-33f6119871c5" containerName="mariadb-database-create" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999592 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f618d289-2120-4097-80bd-0cff83800ff8" containerName="mariadb-account-create-update" Dec 06 03:26:12 crc kubenswrapper[4801]: I1206 03:26:12.999600 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c4a0a8-727e-4e11-891e-c635069d7a91" containerName="mariadb-account-create-update" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.000303 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.016850 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9af5-account-create-update-45pgz"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.017833 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.027697 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6dhf"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.031378 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.043930 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9af5-account-create-update-45pgz"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.050820 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mktw\" (UniqueName: \"kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.050876 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzsjp\" (UniqueName: \"kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.050906 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.050937 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.064000 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.098765 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-psmjd"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.099982 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.123726 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-psmjd"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152069 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152132 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts\") pod \"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152155 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152235 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mktw\" (UniqueName: \"kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzsjp\" (UniqueName: 
\"kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152298 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rt9t\" (UniqueName: \"kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t\") pod \"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.152995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.153134 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.170672 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzsjp\" (UniqueName: \"kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp\") pod \"cinder-9af5-account-create-update-45pgz\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.172918 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4mktw\" (UniqueName: \"kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw\") pod \"cinder-db-create-m6dhf\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.210873 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-1d79-account-create-update-jx9mp"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.212534 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.214729 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.253574 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1d79-account-create-update-jx9mp"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.253784 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rt9t\" (UniqueName: \"kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t\") pod \"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.253847 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts\") pod \"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.255356 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts\") pod 
\"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.269712 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rt9t\" (UniqueName: \"kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t\") pod \"barbican-db-create-psmjd\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.319394 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.345440 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.355423 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v8df\" (UniqueName: \"kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df\") pod \"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.355701 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts\") pod \"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.409877 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fsgvd"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.410858 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.416185 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.446670 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fsgvd"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.457018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts\") pod \"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.457134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v8df\" (UniqueName: \"kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df\") pod \"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.458261 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts\") pod \"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.478771 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v8df\" (UniqueName: \"kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df\") pod 
\"barbican-1d79-account-create-update-jx9mp\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.529638 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7af4-account-create-update-s5mjk"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.531086 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.534694 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.557609 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7af4-account-create-update-s5mjk"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.559301 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts\") pod \"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.559442 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcdn\" (UniqueName: \"kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn\") pod \"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.564862 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.660816 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxn6\" (UniqueName: \"kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.660886 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.660915 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts\") pod \"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.660978 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcdn\" (UniqueName: \"kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn\") pod \"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.662101 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts\") pod 
\"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.681081 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcdn\" (UniqueName: \"kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn\") pod \"neutron-db-create-fsgvd\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.741503 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.762561 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxn6\" (UniqueName: \"kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.762647 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.763410 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.792413 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxn6\" (UniqueName: \"kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6\") pod \"neutron-7af4-account-create-update-s5mjk\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.877870 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6dhf"] Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.899078 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:13 crc kubenswrapper[4801]: I1206 03:26:13.955412 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9af5-account-create-update-45pgz"] Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.059820 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-psmjd"] Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.226088 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-1d79-account-create-update-jx9mp"] Dec 06 03:26:14 crc kubenswrapper[4801]: W1206 03:26:14.231198 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0ae239b_78c8_4b43_aa09_6ffb5b6deeea.slice/crio-267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343 WatchSource:0}: Error finding container 267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343: Status 404 returned error can't find the container with id 267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343 Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.267262 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9af5-account-create-update-45pgz" 
event={"ID":"dac6e360-d06f-4966-ad94-07325a1c4d0f","Type":"ContainerStarted","Data":"d08aa1527fbf555da7bd3d4a95de8041d3b31baaa2be7587cb012d125b252c10"} Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.269485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-psmjd" event={"ID":"6437a4fc-969d-48ef-bc59-8115463e22b4","Type":"ContainerStarted","Data":"60019cee0c6775d28e51ae154bcd0c5aba78ed73a83dee740739dda613a1eb4f"} Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.273750 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6dhf" event={"ID":"1a61607b-902e-4973-b703-ed7eb2b6939a","Type":"ContainerStarted","Data":"64f774506a6226a283d49af5192c94bc6aa54d8f0250837bb8fc0549f8700017"} Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.275869 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1d79-account-create-update-jx9mp" event={"ID":"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea","Type":"ContainerStarted","Data":"267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343"} Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.352559 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fsgvd"] Dec 06 03:26:14 crc kubenswrapper[4801]: W1206 03:26:14.359955 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49252935_dea5_4610_9dec_31761dd3973a.slice/crio-451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5 WatchSource:0}: Error finding container 451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5: Status 404 returned error can't find the container with id 451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5 Dec 06 03:26:14 crc kubenswrapper[4801]: I1206 03:26:14.438324 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7af4-account-create-update-s5mjk"] Dec 06 03:26:15 crc 
kubenswrapper[4801]: I1206 03:26:15.285509 4801 generic.go:334] "Generic (PLEG): container finished" podID="dac6e360-d06f-4966-ad94-07325a1c4d0f" containerID="75a4436168f6cc66bdd3be34116cfe66a10748e9f2cf143fb3a725afd6c45260" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.286108 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9af5-account-create-update-45pgz" event={"ID":"dac6e360-d06f-4966-ad94-07325a1c4d0f","Type":"ContainerDied","Data":"75a4436168f6cc66bdd3be34116cfe66a10748e9f2cf143fb3a725afd6c45260"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.287836 4801 generic.go:334] "Generic (PLEG): container finished" podID="6437a4fc-969d-48ef-bc59-8115463e22b4" containerID="e09c1cc3c628a8796457ec2b65b9565b1725c8a4bcc58380a9c17578cfe9d0f5" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.287886 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-psmjd" event={"ID":"6437a4fc-969d-48ef-bc59-8115463e22b4","Type":"ContainerDied","Data":"e09c1cc3c628a8796457ec2b65b9565b1725c8a4bcc58380a9c17578cfe9d0f5"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.295244 4801 generic.go:334] "Generic (PLEG): container finished" podID="1a61607b-902e-4973-b703-ed7eb2b6939a" containerID="65bf14665a73a7cbe3078a04fa23c750fd57a89a3bf2fe77174dfaa376b57dac" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.295342 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6dhf" event={"ID":"1a61607b-902e-4973-b703-ed7eb2b6939a","Type":"ContainerDied","Data":"65bf14665a73a7cbe3078a04fa23c750fd57a89a3bf2fe77174dfaa376b57dac"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.296736 4801 generic.go:334] "Generic (PLEG): container finished" podID="49252935-dea5-4610-9dec-31761dd3973a" containerID="0138be3dfb8cae6fb2cba0418d9d67a13727cb1b0ff08ac6f59245683c91ecd9" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 
03:26:15.296834 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fsgvd" event={"ID":"49252935-dea5-4610-9dec-31761dd3973a","Type":"ContainerDied","Data":"0138be3dfb8cae6fb2cba0418d9d67a13727cb1b0ff08ac6f59245683c91ecd9"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.296866 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fsgvd" event={"ID":"49252935-dea5-4610-9dec-31761dd3973a","Type":"ContainerStarted","Data":"451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.300227 4801 generic.go:334] "Generic (PLEG): container finished" podID="e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" containerID="989c172c25b01dfcd500e5add5b83e1bc6de0ee008e772fad6e7b3f6b50dc52b" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.300271 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1d79-account-create-update-jx9mp" event={"ID":"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea","Type":"ContainerDied","Data":"989c172c25b01dfcd500e5add5b83e1bc6de0ee008e772fad6e7b3f6b50dc52b"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.301427 4801 generic.go:334] "Generic (PLEG): container finished" podID="a79cca91-19cc-486a-82ad-698b4a88e673" containerID="279b807e025a63fa76e0db0e0ce21c798fdaef2e865a1953ad2b1f8fe5b89875" exitCode=0 Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.301458 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7af4-account-create-update-s5mjk" event={"ID":"a79cca91-19cc-486a-82ad-698b4a88e673","Type":"ContainerDied","Data":"279b807e025a63fa76e0db0e0ce21c798fdaef2e865a1953ad2b1f8fe5b89875"} Dec 06 03:26:15 crc kubenswrapper[4801]: I1206 03:26:15.301474 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7af4-account-create-update-s5mjk" 
event={"ID":"a79cca91-19cc-486a-82ad-698b4a88e673","Type":"ContainerStarted","Data":"f36e51083938b0a82f5295ecd2b2ee2a90aec9740dadb67823c4f6106d2ffae2"} Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.129807 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-p4wzv"] Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.131185 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.134410 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.134815 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.135123 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.135182 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jb48d" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.146502 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-p4wzv"] Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.305547 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.305599 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data\") pod \"keystone-db-sync-p4wzv\" 
(UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.305666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w485\" (UniqueName: \"kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.409978 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.410062 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w485\" (UniqueName: \"kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.410148 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.427871 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " 
pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.428909 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w485\" (UniqueName: \"kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.437352 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data\") pod \"keystone-db-sync-p4wzv\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.449256 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.648277 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tp4d2"] Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.651562 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.652278 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.654187 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.676331 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wvkgz" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.684377 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tp4d2"] Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.773179 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.788909 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.800817 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.818341 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts\") pod \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.818465 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v8df\" (UniqueName: \"kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df\") pod \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\" (UID: \"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.818907 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lj95\" (UniqueName: \"kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.818986 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.819018 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 
03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.819101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.820115 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" (UID: "e0ae239b-78c8-4b43-aa09-6ffb5b6deeea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.820366 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.834237 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df" (OuterVolumeSpecName: "kube-api-access-6v8df") pod "e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" (UID: "e0ae239b-78c8-4b43-aa09-6ffb5b6deeea"). InnerVolumeSpecName "kube-api-access-6v8df". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.919877 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts\") pod \"a79cca91-19cc-486a-82ad-698b4a88e673\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.919926 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mktw\" (UniqueName: \"kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw\") pod \"1a61607b-902e-4973-b703-ed7eb2b6939a\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.919970 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts\") pod \"1a61607b-902e-4973-b703-ed7eb2b6939a\" (UID: \"1a61607b-902e-4973-b703-ed7eb2b6939a\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.919989 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts\") pod \"6437a4fc-969d-48ef-bc59-8115463e22b4\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920004 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqxn6\" (UniqueName: \"kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6\") pod \"a79cca91-19cc-486a-82ad-698b4a88e673\" (UID: \"a79cca91-19cc-486a-82ad-698b4a88e673\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920124 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts\") pod \"dac6e360-d06f-4966-ad94-07325a1c4d0f\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920146 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rt9t\" (UniqueName: \"kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t\") pod \"6437a4fc-969d-48ef-bc59-8115463e22b4\" (UID: \"6437a4fc-969d-48ef-bc59-8115463e22b4\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920173 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzsjp\" (UniqueName: \"kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp\") pod \"dac6e360-d06f-4966-ad94-07325a1c4d0f\" (UID: \"dac6e360-d06f-4966-ad94-07325a1c4d0f\") " Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920399 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lj95\" (UniqueName: \"kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920449 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle\") pod 
\"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a79cca91-19cc-486a-82ad-698b4a88e673" (UID: "a79cca91-19cc-486a-82ad-698b4a88e673"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920538 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920585 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v8df\" (UniqueName: \"kubernetes.io/projected/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-kube-api-access-6v8df\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920709 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a79cca91-19cc-486a-82ad-698b4a88e673-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920726 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.920934 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "1a61607b-902e-4973-b703-ed7eb2b6939a" (UID: "1a61607b-902e-4973-b703-ed7eb2b6939a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.921022 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6437a4fc-969d-48ef-bc59-8115463e22b4" (UID: "6437a4fc-969d-48ef-bc59-8115463e22b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.921504 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac6e360-d06f-4966-ad94-07325a1c4d0f" (UID: "dac6e360-d06f-4966-ad94-07325a1c4d0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.921718 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.926113 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6" (OuterVolumeSpecName: "kube-api-access-lqxn6") pod "a79cca91-19cc-486a-82ad-698b4a88e673" (UID: "a79cca91-19cc-486a-82ad-698b4a88e673"). InnerVolumeSpecName "kube-api-access-lqxn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.926749 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.926828 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.926912 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t" (OuterVolumeSpecName: "kube-api-access-2rt9t") pod "6437a4fc-969d-48ef-bc59-8115463e22b4" (UID: "6437a4fc-969d-48ef-bc59-8115463e22b4"). InnerVolumeSpecName "kube-api-access-2rt9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.927512 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp" (OuterVolumeSpecName: "kube-api-access-kzsjp") pod "dac6e360-d06f-4966-ad94-07325a1c4d0f" (UID: "dac6e360-d06f-4966-ad94-07325a1c4d0f"). InnerVolumeSpecName "kube-api-access-kzsjp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.928150 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw" (OuterVolumeSpecName: "kube-api-access-4mktw") pod "1a61607b-902e-4973-b703-ed7eb2b6939a" (UID: "1a61607b-902e-4973-b703-ed7eb2b6939a"). InnerVolumeSpecName "kube-api-access-4mktw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.935978 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:16 crc kubenswrapper[4801]: I1206 03:26:16.939369 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lj95\" (UniqueName: \"kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95\") pod \"glance-db-sync-tp4d2\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.000037 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tp4d2" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.021810 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpcdn\" (UniqueName: \"kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn\") pod \"49252935-dea5-4610-9dec-31761dd3973a\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.021916 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts\") pod \"49252935-dea5-4610-9dec-31761dd3973a\" (UID: \"49252935-dea5-4610-9dec-31761dd3973a\") " Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022208 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mktw\" (UniqueName: \"kubernetes.io/projected/1a61607b-902e-4973-b703-ed7eb2b6939a-kube-api-access-4mktw\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022221 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a61607b-902e-4973-b703-ed7eb2b6939a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022231 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6437a4fc-969d-48ef-bc59-8115463e22b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022240 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqxn6\" (UniqueName: \"kubernetes.io/projected/a79cca91-19cc-486a-82ad-698b4a88e673-kube-api-access-lqxn6\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022248 4801 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac6e360-d06f-4966-ad94-07325a1c4d0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022257 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rt9t\" (UniqueName: \"kubernetes.io/projected/6437a4fc-969d-48ef-bc59-8115463e22b4-kube-api-access-2rt9t\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022265 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzsjp\" (UniqueName: \"kubernetes.io/projected/dac6e360-d06f-4966-ad94-07325a1c4d0f-kube-api-access-kzsjp\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.022775 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49252935-dea5-4610-9dec-31761dd3973a" (UID: "49252935-dea5-4610-9dec-31761dd3973a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.024909 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn" (OuterVolumeSpecName: "kube-api-access-bpcdn") pod "49252935-dea5-4610-9dec-31761dd3973a" (UID: "49252935-dea5-4610-9dec-31761dd3973a"). InnerVolumeSpecName "kube-api-access-bpcdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.101078 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-p4wzv"] Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.123435 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpcdn\" (UniqueName: \"kubernetes.io/projected/49252935-dea5-4610-9dec-31761dd3973a-kube-api-access-bpcdn\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.123456 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49252935-dea5-4610-9dec-31761dd3973a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.318016 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-1d79-account-create-update-jx9mp" event={"ID":"e0ae239b-78c8-4b43-aa09-6ffb5b6deeea","Type":"ContainerDied","Data":"267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.318501 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="267cfcea7991fe99e06a5651c3ba2229b979f9e0dbc975ac8d2bf23ac9975343" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.318090 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-1d79-account-create-update-jx9mp" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.323221 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7af4-account-create-update-s5mjk" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.323264 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7af4-account-create-update-s5mjk" event={"ID":"a79cca91-19cc-486a-82ad-698b4a88e673","Type":"ContainerDied","Data":"f36e51083938b0a82f5295ecd2b2ee2a90aec9740dadb67823c4f6106d2ffae2"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.323315 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36e51083938b0a82f5295ecd2b2ee2a90aec9740dadb67823c4f6106d2ffae2" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.326983 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9af5-account-create-update-45pgz" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.327022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9af5-account-create-update-45pgz" event={"ID":"dac6e360-d06f-4966-ad94-07325a1c4d0f","Type":"ContainerDied","Data":"d08aa1527fbf555da7bd3d4a95de8041d3b31baaa2be7587cb012d125b252c10"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.327353 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08aa1527fbf555da7bd3d4a95de8041d3b31baaa2be7587cb012d125b252c10" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.328308 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-p4wzv" event={"ID":"ca85741f-399a-4587-9f14-b972c56193e9","Type":"ContainerStarted","Data":"8d14c54e2833084ec589491516dc59edb275f686f273f9055139d05af631810b"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.330164 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-psmjd" event={"ID":"6437a4fc-969d-48ef-bc59-8115463e22b4","Type":"ContainerDied","Data":"60019cee0c6775d28e51ae154bcd0c5aba78ed73a83dee740739dda613a1eb4f"} Dec 06 03:26:17 
crc kubenswrapper[4801]: I1206 03:26:17.330187 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60019cee0c6775d28e51ae154bcd0c5aba78ed73a83dee740739dda613a1eb4f" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.330299 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-psmjd" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.335968 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6dhf" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.335980 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6dhf" event={"ID":"1a61607b-902e-4973-b703-ed7eb2b6939a","Type":"ContainerDied","Data":"64f774506a6226a283d49af5192c94bc6aa54d8f0250837bb8fc0549f8700017"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.336032 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f774506a6226a283d49af5192c94bc6aa54d8f0250837bb8fc0549f8700017" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.341194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fsgvd" event={"ID":"49252935-dea5-4610-9dec-31761dd3973a","Type":"ContainerDied","Data":"451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5"} Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.341235 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451ea39e05d52ff9a4bbc00be4c1f9c563c2c16bf124b7fda2800939beee52b5" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.341278 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fsgvd" Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.501249 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tp4d2"] Dec 06 03:26:17 crc kubenswrapper[4801]: W1206 03:26:17.502264 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod861fdd2b_c39c_4122_94a2_8eb5744c1536.slice/crio-bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a WatchSource:0}: Error finding container bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a: Status 404 returned error can't find the container with id bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a Dec 06 03:26:17 crc kubenswrapper[4801]: I1206 03:26:17.922394 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 06 03:26:18 crc kubenswrapper[4801]: I1206 03:26:18.351403 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tp4d2" event={"ID":"861fdd2b-c39c-4122-94a2-8eb5744c1536","Type":"ContainerStarted","Data":"bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a"} Dec 06 03:26:21 crc kubenswrapper[4801]: I1206 03:26:21.383949 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" containerName="ovn-controller" probeResult="failure" output=< Dec 06 03:26:21 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 03:26:21 crc kubenswrapper[4801]: > Dec 06 03:26:25 crc kubenswrapper[4801]: I1206 03:26:25.420809 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-p4wzv" event={"ID":"ca85741f-399a-4587-9f14-b972c56193e9","Type":"ContainerStarted","Data":"62970a80dea1ea2fdd943513381e359a1789f4745e68d11c45dad0c338e5c77b"} Dec 06 03:26:25 
crc kubenswrapper[4801]: I1206 03:26:25.447001 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-p4wzv" podStartSLOduration=2.119308518 podStartE2EDuration="9.446977084s" podCreationTimestamp="2025-12-06 03:26:16 +0000 UTC" firstStartedPulling="2025-12-06 03:26:17.120943115 +0000 UTC m=+1230.243550687" lastFinishedPulling="2025-12-06 03:26:24.448611681 +0000 UTC m=+1237.571219253" observedRunningTime="2025-12-06 03:26:25.441302142 +0000 UTC m=+1238.563909734" watchObservedRunningTime="2025-12-06 03:26:25.446977084 +0000 UTC m=+1238.569584656" Dec 06 03:26:26 crc kubenswrapper[4801]: I1206 03:26:26.389613 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" containerName="ovn-controller" probeResult="failure" output=< Dec 06 03:26:26 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 03:26:26 crc kubenswrapper[4801]: > Dec 06 03:26:31 crc kubenswrapper[4801]: I1206 03:26:31.387090 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" containerName="ovn-controller" probeResult="failure" output=< Dec 06 03:26:31 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 06 03:26:31 crc kubenswrapper[4801]: > Dec 06 03:26:31 crc kubenswrapper[4801]: I1206 03:26:31.472422 4801 generic.go:334] "Generic (PLEG): container finished" podID="ca85741f-399a-4587-9f14-b972c56193e9" containerID="62970a80dea1ea2fdd943513381e359a1789f4745e68d11c45dad0c338e5c77b" exitCode=0 Dec 06 03:26:31 crc kubenswrapper[4801]: I1206 03:26:31.472463 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-p4wzv" 
event={"ID":"ca85741f-399a-4587-9f14-b972c56193e9","Type":"ContainerDied","Data":"62970a80dea1ea2fdd943513381e359a1789f4745e68d11c45dad0c338e5c77b"} Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.481890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tp4d2" event={"ID":"861fdd2b-c39c-4122-94a2-8eb5744c1536","Type":"ContainerStarted","Data":"096ceb38bfb155296281785c717f33b9461f7bd30de59f0b078cb14d3fac60d6"} Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.506661 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tp4d2" podStartSLOduration=2.877793256 podStartE2EDuration="16.506641445s" podCreationTimestamp="2025-12-06 03:26:16 +0000 UTC" firstStartedPulling="2025-12-06 03:26:17.50579446 +0000 UTC m=+1230.628402042" lastFinishedPulling="2025-12-06 03:26:31.134642659 +0000 UTC m=+1244.257250231" observedRunningTime="2025-12-06 03:26:32.501667532 +0000 UTC m=+1245.624275104" watchObservedRunningTime="2025-12-06 03:26:32.506641445 +0000 UTC m=+1245.629249017" Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.788822 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.898771 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle\") pod \"ca85741f-399a-4587-9f14-b972c56193e9\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.898876 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w485\" (UniqueName: \"kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485\") pod \"ca85741f-399a-4587-9f14-b972c56193e9\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.898998 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data\") pod \"ca85741f-399a-4587-9f14-b972c56193e9\" (UID: \"ca85741f-399a-4587-9f14-b972c56193e9\") " Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.920306 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485" (OuterVolumeSpecName: "kube-api-access-7w485") pod "ca85741f-399a-4587-9f14-b972c56193e9" (UID: "ca85741f-399a-4587-9f14-b972c56193e9"). InnerVolumeSpecName "kube-api-access-7w485". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.930745 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca85741f-399a-4587-9f14-b972c56193e9" (UID: "ca85741f-399a-4587-9f14-b972c56193e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:26:32 crc kubenswrapper[4801]: I1206 03:26:32.958215 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data" (OuterVolumeSpecName: "config-data") pod "ca85741f-399a-4587-9f14-b972c56193e9" (UID: "ca85741f-399a-4587-9f14-b972c56193e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.000721 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w485\" (UniqueName: \"kubernetes.io/projected/ca85741f-399a-4587-9f14-b972c56193e9-kube-api-access-7w485\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.000774 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.000785 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca85741f-399a-4587-9f14-b972c56193e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.491411 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-p4wzv" event={"ID":"ca85741f-399a-4587-9f14-b972c56193e9","Type":"ContainerDied","Data":"8d14c54e2833084ec589491516dc59edb275f686f273f9055139d05af631810b"} Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.491453 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d14c54e2833084ec589491516dc59edb275f686f273f9055139d05af631810b" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.491469 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-p4wzv" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.746638 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"] Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747459 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747481 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747496 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a61607b-902e-4973-b703-ed7eb2b6939a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747503 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a61607b-902e-4973-b703-ed7eb2b6939a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747520 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a79cca91-19cc-486a-82ad-698b4a88e673" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747558 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a79cca91-19cc-486a-82ad-698b4a88e673" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747578 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49252935-dea5-4610-9dec-31761dd3973a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747586 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49252935-dea5-4610-9dec-31761dd3973a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747613 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6437a4fc-969d-48ef-bc59-8115463e22b4" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747620 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6437a4fc-969d-48ef-bc59-8115463e22b4" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747633 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac6e360-d06f-4966-ad94-07325a1c4d0f" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747640 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac6e360-d06f-4966-ad94-07325a1c4d0f" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: E1206 03:26:33.747648 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85741f-399a-4587-9f14-b972c56193e9" containerName="keystone-db-sync" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747654 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85741f-399a-4587-9f14-b972c56193e9" containerName="keystone-db-sync" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747829 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a79cca91-19cc-486a-82ad-698b4a88e673" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747850 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6437a4fc-969d-48ef-bc59-8115463e22b4" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747860 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a61607b-902e-4973-b703-ed7eb2b6939a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747873 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" 
containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747883 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85741f-399a-4587-9f14-b972c56193e9" containerName="keystone-db-sync" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747894 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac6e360-d06f-4966-ad94-07325a1c4d0f" containerName="mariadb-account-create-update" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.747901 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="49252935-dea5-4610-9dec-31761dd3973a" containerName="mariadb-database-create" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.752115 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.792950 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vqq8s"] Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.794452 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.796868 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.797203 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.797400 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jb48d" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.797481 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.797776 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.804265 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"] Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.815326 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.815400 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.815472 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-gx9pz\" (UniqueName: \"kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.815516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.815551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.836883 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vqq8s"] Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916625 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx9pz\" (UniqueName: \"kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " 
pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916728 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916770 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpbhf\" (UniqueName: \"kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916814 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916887 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " 
pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916920 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.916985 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.917027 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.917063 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.918432 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc 
kubenswrapper[4801]: I1206 03:26:33.919072 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.919169 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.919575 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:33 crc kubenswrapper[4801]: I1206 03:26:33.962988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx9pz\" (UniqueName: \"kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz\") pod \"dnsmasq-dns-75bb4695fc-cm6pr\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") " pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018056 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018243 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018303 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpbhf\" (UniqueName: \"kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018466 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018531 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.018797 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.041689 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data\") 
pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.042733 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.047880 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.048368 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.048565 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.054432 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpbhf\" (UniqueName: \"kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf\") pod \"keystone-bootstrap-vqq8s\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 
crc kubenswrapper[4801]: I1206 03:26:34.071455 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.090349 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-8db5z"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.091866 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.105398 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7s22l"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.106469 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.124880 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.125031 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.125128 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-slq6m" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.125337 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f6gpr" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.125474 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.125837 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.130230 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.146020 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.148665 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.157490 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.157675 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.161508 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8db5z"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.178472 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7s22l"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.197771 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-qwr7p"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.198959 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.209554 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.221360 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wxs88" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235268 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzpc\" (UniqueName: \"kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235320 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235355 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235379 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8mq\" (UniqueName: \"kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc 
kubenswrapper[4801]: I1206 03:26:34.235405 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235420 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235437 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235469 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235486 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235504 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235522 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235562 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235609 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235642 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.235660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px9vw\" (UniqueName: \"kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.266418 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.308916 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qwr7p"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.337898 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.337944 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.337976 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data\") pod \"cinder-db-sync-7s22l\" (UID: 
\"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.338007 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.338042 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.349392 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354310 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwpt\" (UniqueName: \"kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354456 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354533 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354656 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px9vw\" (UniqueName: \"kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpzpc\" (UniqueName: \"kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354785 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.354980 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.356368 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358336 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358370 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8mq\" (UniqueName: \"kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358398 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358460 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358481 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts\") pod \"ceilometer-0\" (UID: 
\"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358503 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358577 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.358944 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.382592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.387881 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.388969 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.395455 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.397301 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.397939 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.399479 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.401782 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 
03:26:34.402161 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8mq\" (UniqueName: \"kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq\") pod \"ceilometer-0\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " pod="openstack/ceilometer-0" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.404099 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.404683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.405573 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.412239 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px9vw\" (UniqueName: \"kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw\") pod \"cinder-db-sync-7s22l\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " pod="openstack/cinder-db-sync-7s22l" Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.434141 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jbzbp"] Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.438726 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpzpc\" (UniqueName: 
\"kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc\") pod \"neutron-db-sync-8db5z\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " pod="openstack/neutron-db-sync-8db5z"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.439708 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.449128 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.449351 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g9xd9"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.449533 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.464160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.465806 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.466050 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snwpt\" (UniqueName: \"kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.470021 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jbzbp"]
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.480156 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.485194 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.487140 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snwpt\" (UniqueName: \"kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt\") pod \"barbican-db-sync-qwr7p\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.487410 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"]
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.488885 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.502974 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"]
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.556747 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-8db5z"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569175 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569273 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569307 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569326 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569529 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569648 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569687 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569741 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpstw\" (UniqueName: \"kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569797 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.569869 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhcjn\" (UniqueName: \"kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.570641 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7s22l"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.610517 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.659996 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-qwr7p"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.672120 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.672915 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.672948 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673023 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673070 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673096 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673121 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpstw\" (UniqueName: \"kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673143 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673177 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhcjn\" (UniqueName: \"kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.673220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.674453 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.678506 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.679988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.681639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.682013 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.682916 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.683447 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.697508 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpstw\" (UniqueName: \"kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.697588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data\") pod \"placement-db-sync-jbzbp\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.705987 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhcjn\" (UniqueName: \"kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn\") pod \"dnsmasq-dns-745b9ddc8c-t8cjh\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.786525 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jbzbp"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.793675 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"]
Dec 06 03:26:34 crc kubenswrapper[4801]: W1206 03:26:34.807189 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe18c105_d8ef_4a41_bc06_e0af15f681c7.slice/crio-44eea3d046f4358dd49765bbc3091cff4794b5f13eb68ca4b96083720f3a5337 WatchSource:0}: Error finding container 44eea3d046f4358dd49765bbc3091cff4794b5f13eb68ca4b96083720f3a5337: Status 404 returned error can't find the container with id 44eea3d046f4358dd49765bbc3091cff4794b5f13eb68ca4b96083720f3a5337
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.823187 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh"
Dec 06 03:26:34 crc kubenswrapper[4801]: I1206 03:26:34.920567 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vqq8s"]
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.088062 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7s22l"]
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.248084 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.270246 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-qwr7p"]
Dec 06 03:26:35 crc kubenswrapper[4801]: W1206 03:26:35.283939 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3842042e_a4c9_4f33_bda5_b11f58a69519.slice/crio-b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76 WatchSource:0}: Error finding container b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76: Status 404 returned error can't find the container with id b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.285159 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-8db5z"]
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.396999 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jbzbp"]
Dec 06 03:26:35 crc kubenswrapper[4801]: W1206 03:26:35.410129 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeed8b95a_e314_4ab9_91f4_06df2649e614.slice/crio-11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94 WatchSource:0}: Error finding container 11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94: Status 404 returned error can't find the container with id 11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94
Dec 06 03:26:35 crc kubenswrapper[4801]: W1206 03:26:35.510360 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode411b0df_3e92_41a9_a26b_0dea6c28cb97.slice/crio-7aa3d5d12d7413e6992a318f9efe64fdd22824ba3db8f5a1629343b3c7b3f3ed WatchSource:0}: Error finding container 7aa3d5d12d7413e6992a318f9efe64fdd22824ba3db8f5a1629343b3c7b3f3ed: Status 404 returned error can't find the container with id 7aa3d5d12d7413e6992a318f9efe64fdd22824ba3db8f5a1629343b3c7b3f3ed
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.516611 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8db5z" event={"ID":"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb","Type":"ContainerStarted","Data":"35cda055fe4013403cd51980f0aeef3b2ca2bf644acb48778157611ca7ec21c7"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.522177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerStarted","Data":"e1e8e337f5686a69831b912a0d04945b422a9aed85a1e51729e9b7ff8def905a"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.524042 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jbzbp" event={"ID":"eed8b95a-e314-4ab9-91f4-06df2649e614","Type":"ContainerStarted","Data":"11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.528942 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"]
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.529165 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7s22l" event={"ID":"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f","Type":"ContainerStarted","Data":"71a9b50d9d8fe688a91fd1d49ea5ae459f7515a9d16c97b0e98dfee06d94a963"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.536271 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqq8s" event={"ID":"c7e7b425-a348-486a-8ea9-b1d315c8cc7f","Type":"ContainerStarted","Data":"53536c3b03d5f55e32023415af8e35a1c121039483e0c13d0e694a213ef4bb71"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.537358 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" event={"ID":"fe18c105-d8ef-4a41-bc06-e0af15f681c7","Type":"ContainerStarted","Data":"44eea3d046f4358dd49765bbc3091cff4794b5f13eb68ca4b96083720f3a5337"}
Dec 06 03:26:35 crc kubenswrapper[4801]: I1206 03:26:35.538143 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qwr7p" event={"ID":"3842042e-a4c9-4f33-bda5-b11f58a69519","Type":"ContainerStarted","Data":"b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:35.999102 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.421624 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qqlb5" podUID="eefe8d7e-f739-42c8-88fb-2c27a8630e8b" containerName="ovn-controller" probeResult="failure" output=<
Dec 06 03:26:36 crc kubenswrapper[4801]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Dec 06 03:26:36 crc kubenswrapper[4801]: >
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.493946 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-44f28"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.494039 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-44f28"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.571260 4801 generic.go:334] "Generic (PLEG): container finished" podID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerID="960c690df593d1971022c8bccd181fdceb0b3241c7c73ab56a7bba18ca9eed49" exitCode=0
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.571367 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" event={"ID":"e411b0df-3e92-41a9-a26b-0dea6c28cb97","Type":"ContainerDied","Data":"960c690df593d1971022c8bccd181fdceb0b3241c7c73ab56a7bba18ca9eed49"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.571400 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" event={"ID":"e411b0df-3e92-41a9-a26b-0dea6c28cb97","Type":"ContainerStarted","Data":"7aa3d5d12d7413e6992a318f9efe64fdd22824ba3db8f5a1629343b3c7b3f3ed"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.614074 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqq8s" event={"ID":"c7e7b425-a348-486a-8ea9-b1d315c8cc7f","Type":"ContainerStarted","Data":"668456f1164b915b3e42353b758c36cd1cb7387c65f5316882577c7fd8740195"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.630925 4801 generic.go:334] "Generic (PLEG): container finished" podID="fe18c105-d8ef-4a41-bc06-e0af15f681c7" containerID="9faf4c0e381eeaef6619f72e42e7168e98de66ed9a01a8e59247793a4d1a4f9d" exitCode=0
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.631035 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" event={"ID":"fe18c105-d8ef-4a41-bc06-e0af15f681c7","Type":"ContainerDied","Data":"9faf4c0e381eeaef6619f72e42e7168e98de66ed9a01a8e59247793a4d1a4f9d"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.634907 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vqq8s" podStartSLOduration=3.634884471 podStartE2EDuration="3.634884471s" podCreationTimestamp="2025-12-06 03:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:26:36.631255765 +0000 UTC m=+1249.753863347" watchObservedRunningTime="2025-12-06 03:26:36.634884471 +0000 UTC m=+1249.757492043"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.646126 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8db5z" event={"ID":"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb","Type":"ContainerStarted","Data":"b929d467eea811d7bb7b6b5814208db044bb55663ad964587efa6bd04d133433"}
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.708686 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-8db5z" podStartSLOduration=2.708659765 podStartE2EDuration="2.708659765s" podCreationTimestamp="2025-12-06 03:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:26:36.673469714 +0000 UTC m=+1249.796077286" watchObservedRunningTime="2025-12-06 03:26:36.708659765 +0000 UTC m=+1249.831267327"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.742734 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqlb5-config-pl6f5"]
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.743992 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.746171 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.755221 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5-config-pl6f5"]
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.854908 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.854983 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.855109 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.855273 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.855368 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qdl\" (UniqueName: \"kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.855395 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959116 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qdl\" (UniqueName: \"kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959174 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959270 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959322 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959368 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959795 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.959868 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.960536 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.960678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.962249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:36 crc kubenswrapper[4801]: I1206 03:26:36.986239 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qdl\" (UniqueName: \"kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl\") pod \"ovn-controller-qqlb5-config-pl6f5\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.145855 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr"
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.154800 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-pl6f5"
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.271859 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb\") pod \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") "
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.271939 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc\") pod \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") "
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.272015 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb\") pod \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") "
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.272111 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx9pz\" (UniqueName: \"kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz\") pod \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") "
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.272206 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config\") pod \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\" (UID: \"fe18c105-d8ef-4a41-bc06-e0af15f681c7\") "
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.282723 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz" (OuterVolumeSpecName: "kube-api-access-gx9pz") pod "fe18c105-d8ef-4a41-bc06-e0af15f681c7" (UID: "fe18c105-d8ef-4a41-bc06-e0af15f681c7"). InnerVolumeSpecName "kube-api-access-gx9pz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.301536 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config" (OuterVolumeSpecName: "config") pod "fe18c105-d8ef-4a41-bc06-e0af15f681c7" (UID: "fe18c105-d8ef-4a41-bc06-e0af15f681c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.319922 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fe18c105-d8ef-4a41-bc06-e0af15f681c7" (UID: "fe18c105-d8ef-4a41-bc06-e0af15f681c7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.321178 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fe18c105-d8ef-4a41-bc06-e0af15f681c7" (UID: "fe18c105-d8ef-4a41-bc06-e0af15f681c7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.328476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fe18c105-d8ef-4a41-bc06-e0af15f681c7" (UID: "fe18c105-d8ef-4a41-bc06-e0af15f681c7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.374296 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx9pz\" (UniqueName: \"kubernetes.io/projected/fe18c105-d8ef-4a41-bc06-e0af15f681c7-kube-api-access-gx9pz\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.374335 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-config\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.374345 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.374353 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.374362 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fe18c105-d8ef-4a41-bc06-e0af15f681c7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.661561 4801 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.661567 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75bb4695fc-cm6pr" event={"ID":"fe18c105-d8ef-4a41-bc06-e0af15f681c7","Type":"ContainerDied","Data":"44eea3d046f4358dd49765bbc3091cff4794b5f13eb68ca4b96083720f3a5337"} Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.662398 4801 scope.go:117] "RemoveContainer" containerID="9faf4c0e381eeaef6619f72e42e7168e98de66ed9a01a8e59247793a4d1a4f9d" Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.667965 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" event={"ID":"e411b0df-3e92-41a9-a26b-0dea6c28cb97","Type":"ContainerStarted","Data":"6596d84f87e27602be7bbf8c835e1e3c73429d39fb031cf2eb171e7d3fe2c58d"} Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.668545 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.696834 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" podStartSLOduration=3.696809634 podStartE2EDuration="3.696809634s" podCreationTimestamp="2025-12-06 03:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:26:37.686878498 +0000 UTC m=+1250.809486070" watchObservedRunningTime="2025-12-06 03:26:37.696809634 +0000 UTC m=+1250.819417206" Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.737443 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"] Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 03:26:37.771800 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75bb4695fc-cm6pr"] Dec 06 03:26:37 crc kubenswrapper[4801]: I1206 
03:26:37.786489 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5-config-pl6f5"] Dec 06 03:26:38 crc kubenswrapper[4801]: I1206 03:26:38.677390 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-pl6f5" event={"ID":"2b067d94-ea75-4101-9b13-b4808b49d3a9","Type":"ContainerStarted","Data":"3a1bdd757a2dda20fb9152229e4308f7c32211ff4abdcf77fe5f28784d7d80c5"} Dec 06 03:26:38 crc kubenswrapper[4801]: I1206 03:26:38.677968 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-pl6f5" event={"ID":"2b067d94-ea75-4101-9b13-b4808b49d3a9","Type":"ContainerStarted","Data":"deda536e8e1c314ecbd5ff767713ed7dc264dcadae32c332d7fb88a4121f8771"} Dec 06 03:26:38 crc kubenswrapper[4801]: I1206 03:26:38.702312 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qqlb5-config-pl6f5" podStartSLOduration=2.702291777 podStartE2EDuration="2.702291777s" podCreationTimestamp="2025-12-06 03:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:26:38.701018533 +0000 UTC m=+1251.823626115" watchObservedRunningTime="2025-12-06 03:26:38.702291777 +0000 UTC m=+1251.824899369" Dec 06 03:26:39 crc kubenswrapper[4801]: I1206 03:26:39.280740 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe18c105-d8ef-4a41-bc06-e0af15f681c7" path="/var/lib/kubelet/pods/fe18c105-d8ef-4a41-bc06-e0af15f681c7/volumes" Dec 06 03:26:39 crc kubenswrapper[4801]: I1206 03:26:39.689836 4801 generic.go:334] "Generic (PLEG): container finished" podID="2b067d94-ea75-4101-9b13-b4808b49d3a9" containerID="3a1bdd757a2dda20fb9152229e4308f7c32211ff4abdcf77fe5f28784d7d80c5" exitCode=0 Dec 06 03:26:39 crc kubenswrapper[4801]: I1206 03:26:39.690134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-qqlb5-config-pl6f5" event={"ID":"2b067d94-ea75-4101-9b13-b4808b49d3a9","Type":"ContainerDied","Data":"3a1bdd757a2dda20fb9152229e4308f7c32211ff4abdcf77fe5f28784d7d80c5"} Dec 06 03:26:40 crc kubenswrapper[4801]: I1206 03:26:40.701045 4801 generic.go:334] "Generic (PLEG): container finished" podID="c7e7b425-a348-486a-8ea9-b1d315c8cc7f" containerID="668456f1164b915b3e42353b758c36cd1cb7387c65f5316882577c7fd8740195" exitCode=0 Dec 06 03:26:40 crc kubenswrapper[4801]: I1206 03:26:40.701222 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqq8s" event={"ID":"c7e7b425-a348-486a-8ea9-b1d315c8cc7f","Type":"ContainerDied","Data":"668456f1164b915b3e42353b758c36cd1cb7387c65f5316882577c7fd8740195"} Dec 06 03:26:41 crc kubenswrapper[4801]: I1206 03:26:41.390013 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qqlb5" Dec 06 03:26:44 crc kubenswrapper[4801]: I1206 03:26:44.825681 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" Dec 06 03:26:44 crc kubenswrapper[4801]: I1206 03:26:44.879496 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:26:44 crc kubenswrapper[4801]: I1206 03:26:44.879838 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" containerID="cri-o://6041169c19cf2bf18daf8d421196f1239abaf9ecca544873e57413d049746847" gracePeriod=10 Dec 06 03:26:45 crc kubenswrapper[4801]: I1206 03:26:45.828985 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Dec 06 03:26:48 crc 
kubenswrapper[4801]: I1206 03:26:48.773915 4801 generic.go:334] "Generic (PLEG): container finished" podID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerID="6041169c19cf2bf18daf8d421196f1239abaf9ecca544873e57413d049746847" exitCode=0 Dec 06 03:26:48 crc kubenswrapper[4801]: I1206 03:26:48.774420 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" event={"ID":"9b0adc6a-9d6a-4226-b807-79f3d905925a","Type":"ContainerDied","Data":"6041169c19cf2bf18daf8d421196f1239abaf9ecca544873e57413d049746847"} Dec 06 03:26:50 crc kubenswrapper[4801]: I1206 03:26:50.828334 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Dec 06 03:26:55 crc kubenswrapper[4801]: I1206 03:26:55.828615 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Dec 06 03:26:55 crc kubenswrapper[4801]: I1206 03:26:55.829914 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:27:00 crc kubenswrapper[4801]: I1206 03:27:00.828880 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.217238 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.225428 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-pl6f5" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309276 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309325 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309357 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309390 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309415 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: 
\"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309485 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpbhf\" (UniqueName: \"kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.309511 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data\") pod \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\" (UID: \"c7e7b425-a348-486a-8ea9-b1d315c8cc7f\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.310402 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.311316 4801 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.316376 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.316684 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts" (OuterVolumeSpecName: "scripts") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.317909 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf" (OuterVolumeSpecName: "kube-api-access-hpbhf") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "kube-api-access-hpbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.318022 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.335903 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data" (OuterVolumeSpecName: "config-data") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.336097 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e7b425-a348-486a-8ea9-b1d315c8cc7f" (UID: "c7e7b425-a348-486a-8ea9-b1d315c8cc7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.412590 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.412660 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.412699 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qdl\" (UniqueName: \"kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.412721 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.412879 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn\") pod \"2b067d94-ea75-4101-9b13-b4808b49d3a9\" (UID: \"2b067d94-ea75-4101-9b13-b4808b49d3a9\") " Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413118 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413201 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413520 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run" (OuterVolumeSpecName: "var-run") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413555 4801 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413578 4801 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413589 4801 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413600 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413611 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413620 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413633 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpbhf\" (UniqueName: \"kubernetes.io/projected/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-kube-api-access-hpbhf\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.413642 4801 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7b425-a348-486a-8ea9-b1d315c8cc7f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.414135 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts" (OuterVolumeSpecName: "scripts") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.417829 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl" (OuterVolumeSpecName: "kube-api-access-s5qdl") pod "2b067d94-ea75-4101-9b13-b4808b49d3a9" (UID: "2b067d94-ea75-4101-9b13-b4808b49d3a9"). InnerVolumeSpecName "kube-api-access-s5qdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.515714 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b067d94-ea75-4101-9b13-b4808b49d3a9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.515800 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qdl\" (UniqueName: \"kubernetes.io/projected/2b067d94-ea75-4101-9b13-b4808b49d3a9-kube-api-access-s5qdl\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.515816 4801 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2b067d94-ea75-4101-9b13-b4808b49d3a9-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.906288 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vqq8s" event={"ID":"c7e7b425-a348-486a-8ea9-b1d315c8cc7f","Type":"ContainerDied","Data":"53536c3b03d5f55e32023415af8e35a1c121039483e0c13d0e694a213ef4bb71"} Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.906872 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53536c3b03d5f55e32023415af8e35a1c121039483e0c13d0e694a213ef4bb71" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.906334 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vqq8s" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.908643 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-pl6f5" event={"ID":"2b067d94-ea75-4101-9b13-b4808b49d3a9","Type":"ContainerDied","Data":"deda536e8e1c314ecbd5ff767713ed7dc264dcadae32c332d7fb88a4121f8771"} Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.908738 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deda536e8e1c314ecbd5ff767713ed7dc264dcadae32c332d7fb88a4121f8771" Dec 06 03:27:02 crc kubenswrapper[4801]: I1206 03:27:02.908773 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-pl6f5" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.307531 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vqq8s"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.315915 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vqq8s"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.331400 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qqlb5-config-pl6f5"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.356408 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qqlb5-config-pl6f5"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.407813 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qxt5m"] Dec 06 03:27:03 crc kubenswrapper[4801]: E1206 03:27:03.408167 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b067d94-ea75-4101-9b13-b4808b49d3a9" containerName="ovn-config" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408183 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b067d94-ea75-4101-9b13-b4808b49d3a9" containerName="ovn-config" 
Dec 06 03:27:03 crc kubenswrapper[4801]: E1206 03:27:03.408208 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe18c105-d8ef-4a41-bc06-e0af15f681c7" containerName="init" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408215 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe18c105-d8ef-4a41-bc06-e0af15f681c7" containerName="init" Dec 06 03:27:03 crc kubenswrapper[4801]: E1206 03:27:03.408227 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e7b425-a348-486a-8ea9-b1d315c8cc7f" containerName="keystone-bootstrap" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408233 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e7b425-a348-486a-8ea9-b1d315c8cc7f" containerName="keystone-bootstrap" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408394 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e7b425-a348-486a-8ea9-b1d315c8cc7f" containerName="keystone-bootstrap" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408416 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b067d94-ea75-4101-9b13-b4808b49d3a9" containerName="ovn-config" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.408428 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe18c105-d8ef-4a41-bc06-e0af15f681c7" containerName="init" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.409053 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.412788 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.412947 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jb48d" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.412993 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.413218 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.415877 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.430991 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qxt5m"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.431689 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.432028 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.432103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.432190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.432252 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.432571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqcqh\" (UniqueName: \"kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.456618 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qqlb5-config-p725w"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.458543 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.462421 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.486276 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5-config-p725w"] Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533658 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533707 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533786 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 
03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533837 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.533857 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh56p\" (UniqueName: \"kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.534071 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.534151 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc 
kubenswrapper[4801]: I1206 03:27:03.534263 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.534388 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqcqh\" (UniqueName: \"kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.534451 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.540443 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.541372 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.541954 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.545017 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.550189 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.566008 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqcqh\" (UniqueName: \"kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh\") pod \"keystone-bootstrap-qxt5m\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636427 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636525 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh56p\" 
(UniqueName: \"kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636558 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636685 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.636775 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.638282 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.638342 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.638342 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.638565 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.640762 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.657827 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh56p\" (UniqueName: 
\"kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p\") pod \"ovn-controller-qqlb5-config-p725w\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.732570 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:03 crc kubenswrapper[4801]: I1206 03:27:03.779394 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:05 crc kubenswrapper[4801]: I1206 03:27:05.223708 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b067d94-ea75-4101-9b13-b4808b49d3a9" path="/var/lib/kubelet/pods/2b067d94-ea75-4101-9b13-b4808b49d3a9/volumes" Dec 06 03:27:05 crc kubenswrapper[4801]: I1206 03:27:05.224602 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e7b425-a348-486a-8ea9-b1d315c8cc7f" path="/var/lib/kubelet/pods/c7e7b425-a348-486a-8ea9-b1d315c8cc7f/volumes" Dec 06 03:27:05 crc kubenswrapper[4801]: E1206 03:27:05.458877 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 06 03:27:05 crc kubenswrapper[4801]: E1206 03:27:05.459082 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-snwpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-qwr7p_openstack(3842042e-a4c9-4f33-bda5-b11f58a69519): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:27:05 crc kubenswrapper[4801]: E1206 03:27:05.460386 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-qwr7p" 
podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" Dec 06 03:27:05 crc kubenswrapper[4801]: E1206 03:27:05.961409 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-qwr7p" podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" Dec 06 03:27:07 crc kubenswrapper[4801]: E1206 03:27:07.148620 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 06 03:27:07 crc kubenswrapper[4801]: E1206 03:27:07.149042 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55chd6hddh586h57dhf8hfh58fh7bh577h5f5h645h54h55dh64dhcfh5bdh547hd9h5f5h76h57fh667h5b4h575h58fh548h5f6h649h85h79h8bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(28d06e7d-a469-4050-9f2c-db9da8389c58): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:27:10 crc kubenswrapper[4801]: I1206 03:27:10.828868 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Dec 06 03:27:15 crc kubenswrapper[4801]: I1206 03:27:15.829714 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.553821 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.729931 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb\") pod \"9b0adc6a-9d6a-4226-b807-79f3d905925a\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.730307 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config\") pod \"9b0adc6a-9d6a-4226-b807-79f3d905925a\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.730332 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km8sh\" (UniqueName: \"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh\") pod \"9b0adc6a-9d6a-4226-b807-79f3d905925a\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.730417 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc\") pod \"9b0adc6a-9d6a-4226-b807-79f3d905925a\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.730440 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb\") pod \"9b0adc6a-9d6a-4226-b807-79f3d905925a\" (UID: \"9b0adc6a-9d6a-4226-b807-79f3d905925a\") " Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.743329 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh" (OuterVolumeSpecName: "kube-api-access-km8sh") pod "9b0adc6a-9d6a-4226-b807-79f3d905925a" (UID: "9b0adc6a-9d6a-4226-b807-79f3d905925a"). InnerVolumeSpecName "kube-api-access-km8sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.773215 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9b0adc6a-9d6a-4226-b807-79f3d905925a" (UID: "9b0adc6a-9d6a-4226-b807-79f3d905925a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.775428 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config" (OuterVolumeSpecName: "config") pod "9b0adc6a-9d6a-4226-b807-79f3d905925a" (UID: "9b0adc6a-9d6a-4226-b807-79f3d905925a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.782484 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9b0adc6a-9d6a-4226-b807-79f3d905925a" (UID: "9b0adc6a-9d6a-4226-b807-79f3d905925a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.787607 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9b0adc6a-9d6a-4226-b807-79f3d905925a" (UID: "9b0adc6a-9d6a-4226-b807-79f3d905925a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.832149 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km8sh\" (UniqueName: \"kubernetes.io/projected/9b0adc6a-9d6a-4226-b807-79f3d905925a-kube-api-access-km8sh\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.832182 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.832195 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.832206 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:17 crc kubenswrapper[4801]: I1206 03:27:17.832217 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b0adc6a-9d6a-4226-b807-79f3d905925a-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:18 crc kubenswrapper[4801]: I1206 03:27:18.055512 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" event={"ID":"9b0adc6a-9d6a-4226-b807-79f3d905925a","Type":"ContainerDied","Data":"04ab610e0d9a226b1aa8999878c03ad5b7d62ff5ecc8708a4f601fe4cde626af"} Dec 06 03:27:18 crc kubenswrapper[4801]: I1206 03:27:18.055583 4801 scope.go:117] "RemoveContainer" containerID="6041169c19cf2bf18daf8d421196f1239abaf9ecca544873e57413d049746847" Dec 06 03:27:18 crc kubenswrapper[4801]: I1206 03:27:18.055581 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" Dec 06 03:27:18 crc kubenswrapper[4801]: I1206 03:27:18.093938 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:27:18 crc kubenswrapper[4801]: I1206 03:27:18.101552 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-bltz2"] Dec 06 03:27:19 crc kubenswrapper[4801]: I1206 03:27:19.221609 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" path="/var/lib/kubelet/pods/9b0adc6a-9d6a-4226-b807-79f3d905925a/volumes" Dec 06 03:27:20 crc kubenswrapper[4801]: I1206 03:27:20.830994 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-bltz2" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Dec 06 03:27:29 crc kubenswrapper[4801]: I1206 03:27:29.031696 4801 scope.go:117] "RemoveContainer" containerID="a867bcc693f7f30778834124c97a6d22271f07934e704abe63267f90b7d740e2" Dec 06 03:27:29 crc kubenswrapper[4801]: I1206 03:27:29.356555 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qqlb5-config-p725w"] Dec 06 03:27:29 crc kubenswrapper[4801]: I1206 03:27:29.494175 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qxt5m"] Dec 06 03:27:34 crc kubenswrapper[4801]: E1206 03:27:34.239528 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 06 03:27:34 crc kubenswrapper[4801]: E1206 03:27:34.240438 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-px9vw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7s22l_openstack(e8a2ead4-9b5d-465c-9b4a-5c7377ad246f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:27:34 crc kubenswrapper[4801]: E1206 03:27:34.241611 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7s22l" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" Dec 06 03:27:35 crc kubenswrapper[4801]: I1206 03:27:35.234734 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qxt5m" event={"ID":"7f5b3256-9ed6-45e3-acec-a0b14d607802","Type":"ContainerStarted","Data":"589265b3ee1ee2cf33c798c8ed3af39a56fd3ca9f00358da8ea33a76d020e340"} Dec 06 03:27:35 crc kubenswrapper[4801]: I1206 03:27:35.236679 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-p725w" event={"ID":"d126df52-c802-4779-96fe-8367103e1dbe","Type":"ContainerStarted","Data":"41e4524fba65fc8c7b0ee5ee7c1ff0440fbfc352a6253c114690d9e5eaf93a6e"} Dec 06 03:27:35 crc kubenswrapper[4801]: E1206 03:27:35.237600 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7s22l" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.278798 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qwr7p" event={"ID":"3842042e-a4c9-4f33-bda5-b11f58a69519","Type":"ContainerStarted","Data":"0ba08bc914c6ef000e31bb724672892b9bdc1aabc93e340f4fa7166a7b5bef1b"} Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.281111 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-p725w" event={"ID":"d126df52-c802-4779-96fe-8367103e1dbe","Type":"ContainerStarted","Data":"281b72f0abffde05cad5b84de79336d1666f87697c96f682c162076ab3b68e2c"} Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.283182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qxt5m" event={"ID":"7f5b3256-9ed6-45e3-acec-a0b14d607802","Type":"ContainerStarted","Data":"9b1c57f8e8d7af3e0cf5be0fa3d46c0f986bf9d98ac1ff012c7fb21aaa08899b"} Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.286238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerStarted","Data":"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7"} Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.287682 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jbzbp" event={"ID":"eed8b95a-e314-4ab9-91f4-06df2649e614","Type":"ContainerStarted","Data":"458c35b58d2d00b22609a0f92344c2fb33e07af8a6ce87cce3ec13e2d2d2bb76"} Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.300769 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-qwr7p" podStartSLOduration=3.4637623619999998 podStartE2EDuration="1m4.300738702s" podCreationTimestamp="2025-12-06 03:26:34 +0000 UTC" firstStartedPulling="2025-12-06 03:26:35.304253502 +0000 UTC m=+1248.426861074" lastFinishedPulling="2025-12-06 03:27:36.141229842 +0000 UTC m=+1309.263837414" observedRunningTime="2025-12-06 
03:27:38.296667813 +0000 UTC m=+1311.419275385" watchObservedRunningTime="2025-12-06 03:27:38.300738702 +0000 UTC m=+1311.423346274" Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.331807 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qqlb5-config-p725w" podStartSLOduration=35.331789812 podStartE2EDuration="35.331789812s" podCreationTimestamp="2025-12-06 03:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:38.327658042 +0000 UTC m=+1311.450265614" watchObservedRunningTime="2025-12-06 03:27:38.331789812 +0000 UTC m=+1311.454397374" Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.332018 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qxt5m" podStartSLOduration=35.332012608 podStartE2EDuration="35.332012608s" podCreationTimestamp="2025-12-06 03:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:38.314374806 +0000 UTC m=+1311.436982388" watchObservedRunningTime="2025-12-06 03:27:38.332012608 +0000 UTC m=+1311.454620180" Dec 06 03:27:38 crc kubenswrapper[4801]: I1206 03:27:38.350210 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-jbzbp" podStartSLOduration=10.787286703 podStartE2EDuration="1m4.350189794s" podCreationTimestamp="2025-12-06 03:26:34 +0000 UTC" firstStartedPulling="2025-12-06 03:26:35.414495941 +0000 UTC m=+1248.537103513" lastFinishedPulling="2025-12-06 03:27:28.977399032 +0000 UTC m=+1302.100006604" observedRunningTime="2025-12-06 03:27:38.346611038 +0000 UTC m=+1311.469218610" watchObservedRunningTime="2025-12-06 03:27:38.350189794 +0000 UTC m=+1311.472797366" Dec 06 03:27:39 crc kubenswrapper[4801]: I1206 03:27:39.309816 4801 generic.go:334] "Generic 
(PLEG): container finished" podID="d126df52-c802-4779-96fe-8367103e1dbe" containerID="281b72f0abffde05cad5b84de79336d1666f87697c96f682c162076ab3b68e2c" exitCode=0 Dec 06 03:27:39 crc kubenswrapper[4801]: I1206 03:27:39.310977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-p725w" event={"ID":"d126df52-c802-4779-96fe-8367103e1dbe","Type":"ContainerDied","Data":"281b72f0abffde05cad5b84de79336d1666f87697c96f682c162076ab3b68e2c"} Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.639809 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759197 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759291 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run" (OuterVolumeSpecName: "var-run") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759329 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759407 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759438 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh56p\" (UniqueName: \"kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759472 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759487 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759518 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts\") pod \"d126df52-c802-4779-96fe-8367103e1dbe\" (UID: \"d126df52-c802-4779-96fe-8367103e1dbe\") " Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.759652 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.760179 4801 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.760199 4801 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.760211 4801 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d126df52-c802-4779-96fe-8367103e1dbe-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.760358 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.760519 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts" (OuterVolumeSpecName: "scripts") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.769463 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p" (OuterVolumeSpecName: "kube-api-access-wh56p") pod "d126df52-c802-4779-96fe-8367103e1dbe" (UID: "d126df52-c802-4779-96fe-8367103e1dbe"). InnerVolumeSpecName "kube-api-access-wh56p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.861897 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh56p\" (UniqueName: \"kubernetes.io/projected/d126df52-c802-4779-96fe-8367103e1dbe-kube-api-access-wh56p\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.862269 4801 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:40 crc kubenswrapper[4801]: I1206 03:27:40.862281 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d126df52-c802-4779-96fe-8367103e1dbe-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:41 crc kubenswrapper[4801]: I1206 03:27:41.331373 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qqlb5-config-p725w" 
event={"ID":"d126df52-c802-4779-96fe-8367103e1dbe","Type":"ContainerDied","Data":"41e4524fba65fc8c7b0ee5ee7c1ff0440fbfc352a6253c114690d9e5eaf93a6e"} Dec 06 03:27:41 crc kubenswrapper[4801]: I1206 03:27:41.331411 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41e4524fba65fc8c7b0ee5ee7c1ff0440fbfc352a6253c114690d9e5eaf93a6e" Dec 06 03:27:41 crc kubenswrapper[4801]: I1206 03:27:41.331433 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qqlb5-config-p725w" Dec 06 03:27:41 crc kubenswrapper[4801]: I1206 03:27:41.739821 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qqlb5-config-p725w"] Dec 06 03:27:41 crc kubenswrapper[4801]: I1206 03:27:41.748507 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qqlb5-config-p725w"] Dec 06 03:27:42 crc kubenswrapper[4801]: I1206 03:27:42.342048 4801 generic.go:334] "Generic (PLEG): container finished" podID="7f5b3256-9ed6-45e3-acec-a0b14d607802" containerID="9b1c57f8e8d7af3e0cf5be0fa3d46c0f986bf9d98ac1ff012c7fb21aaa08899b" exitCode=0 Dec 06 03:27:42 crc kubenswrapper[4801]: I1206 03:27:42.342338 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qxt5m" event={"ID":"7f5b3256-9ed6-45e3-acec-a0b14d607802","Type":"ContainerDied","Data":"9b1c57f8e8d7af3e0cf5be0fa3d46c0f986bf9d98ac1ff012c7fb21aaa08899b"} Dec 06 03:27:43 crc kubenswrapper[4801]: I1206 03:27:43.224357 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d126df52-c802-4779-96fe-8367103e1dbe" path="/var/lib/kubelet/pods/d126df52-c802-4779-96fe-8367103e1dbe/volumes" Dec 06 03:27:43 crc kubenswrapper[4801]: I1206 03:27:43.351050 4801 generic.go:334] "Generic (PLEG): container finished" podID="eed8b95a-e314-4ab9-91f4-06df2649e614" containerID="458c35b58d2d00b22609a0f92344c2fb33e07af8a6ce87cce3ec13e2d2d2bb76" exitCode=0 Dec 06 03:27:43 crc 
kubenswrapper[4801]: I1206 03:27:43.351247 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jbzbp" event={"ID":"eed8b95a-e314-4ab9-91f4-06df2649e614","Type":"ContainerDied","Data":"458c35b58d2d00b22609a0f92344c2fb33e07af8a6ce87cce3ec13e2d2d2bb76"} Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.301042 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.367403 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qxt5m" event={"ID":"7f5b3256-9ed6-45e3-acec-a0b14d607802","Type":"ContainerDied","Data":"589265b3ee1ee2cf33c798c8ed3af39a56fd3ca9f00358da8ea33a76d020e340"} Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.367724 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="589265b3ee1ee2cf33c798c8ed3af39a56fd3ca9f00358da8ea33a76d020e340" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.367532 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qxt5m" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.382420 4801 generic.go:334] "Generic (PLEG): container finished" podID="861fdd2b-c39c-4122-94a2-8eb5744c1536" containerID="096ceb38bfb155296281785c717f33b9461f7bd30de59f0b078cb14d3fac60d6" exitCode=0 Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.382489 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tp4d2" event={"ID":"861fdd2b-c39c-4122-94a2-8eb5744c1536","Type":"ContainerDied","Data":"096ceb38bfb155296281785c717f33b9461f7bd30de59f0b078cb14d3fac60d6"} Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.388057 4801 generic.go:334] "Generic (PLEG): container finished" podID="3842042e-a4c9-4f33-bda5-b11f58a69519" containerID="0ba08bc914c6ef000e31bb724672892b9bdc1aabc93e340f4fa7166a7b5bef1b" exitCode=0 Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.388127 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qwr7p" event={"ID":"3842042e-a4c9-4f33-bda5-b11f58a69519","Type":"ContainerDied","Data":"0ba08bc914c6ef000e31bb724672892b9bdc1aabc93e340f4fa7166a7b5bef1b"} Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424515 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424568 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqcqh\" (UniqueName: \"kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424662 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424728 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424779 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.424870 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys\") pod \"7f5b3256-9ed6-45e3-acec-a0b14d607802\" (UID: \"7f5b3256-9ed6-45e3-acec-a0b14d607802\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.431405 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.432329 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.432592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts" (OuterVolumeSpecName: "scripts") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.438606 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh" (OuterVolumeSpecName: "kube-api-access-mqcqh") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "kube-api-access-mqcqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.468210 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data" (OuterVolumeSpecName: "config-data") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.478440 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-66f8fdb7b9-xsvqm"] Dec 06 03:27:44 crc kubenswrapper[4801]: E1206 03:27:44.482870 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="init" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483043 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="init" Dec 06 03:27:44 crc kubenswrapper[4801]: E1206 03:27:44.483060 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d126df52-c802-4779-96fe-8367103e1dbe" containerName="ovn-config" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483068 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d126df52-c802-4779-96fe-8367103e1dbe" containerName="ovn-config" Dec 06 03:27:44 crc kubenswrapper[4801]: E1206 03:27:44.483089 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483097 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" Dec 06 03:27:44 crc kubenswrapper[4801]: E1206 03:27:44.483142 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5b3256-9ed6-45e3-acec-a0b14d607802" containerName="keystone-bootstrap" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483181 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5b3256-9ed6-45e3-acec-a0b14d607802" containerName="keystone-bootstrap" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483523 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0adc6a-9d6a-4226-b807-79f3d905925a" containerName="dnsmasq-dns" Dec 06 03:27:44 crc 
kubenswrapper[4801]: I1206 03:27:44.483554 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5b3256-9ed6-45e3-acec-a0b14d607802" containerName="keystone-bootstrap" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.483570 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d126df52-c802-4779-96fe-8367103e1dbe" containerName="ovn-config" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.484414 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.493736 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66f8fdb7b9-xsvqm"] Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.499812 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.500464 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.506896 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f5b3256-9ed6-45e3-acec-a0b14d607802" (UID: "7f5b3256-9ed6-45e3-acec-a0b14d607802"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.526996 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.527024 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqcqh\" (UniqueName: \"kubernetes.io/projected/7f5b3256-9ed6-45e3-acec-a0b14d607802-kube-api-access-mqcqh\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.527034 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.527044 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.527052 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.527060 4801 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7f5b3256-9ed6-45e3-acec-a0b14d607802-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628095 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qd6\" (UniqueName: \"kubernetes.io/projected/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-kube-api-access-85qd6\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: 
\"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-public-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628164 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-config-data\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628192 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-internal-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628223 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-fernet-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628248 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-credential-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: 
\"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-combined-ca-bundle\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.628315 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-scripts\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.706552 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jbzbp" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730066 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-scripts\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qd6\" (UniqueName: \"kubernetes.io/projected/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-kube-api-access-85qd6\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730556 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-public-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730587 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-config-data\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-internal-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-fernet-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-credential-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.730711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-combined-ca-bundle\") 
pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.735638 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-config-data\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.735797 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-public-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.736160 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-fernet-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.745547 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-credential-keys\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.745725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-combined-ca-bundle\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 
03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.746034 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-scripts\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.748439 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-internal-tls-certs\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.750681 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qd6\" (UniqueName: \"kubernetes.io/projected/09d46a1d-755b-43d4-81f5-3a3be44ea3d4-kube-api-access-85qd6\") pod \"keystone-66f8fdb7b9-xsvqm\" (UID: \"09d46a1d-755b-43d4-81f5-3a3be44ea3d4\") " pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.819151 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.835304 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts\") pod \"eed8b95a-e314-4ab9-91f4-06df2649e614\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.835409 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data\") pod \"eed8b95a-e314-4ab9-91f4-06df2649e614\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.835442 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs\") pod \"eed8b95a-e314-4ab9-91f4-06df2649e614\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.836064 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs" (OuterVolumeSpecName: "logs") pod "eed8b95a-e314-4ab9-91f4-06df2649e614" (UID: "eed8b95a-e314-4ab9-91f4-06df2649e614"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.836200 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpstw\" (UniqueName: \"kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw\") pod \"eed8b95a-e314-4ab9-91f4-06df2649e614\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.836234 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle\") pod \"eed8b95a-e314-4ab9-91f4-06df2649e614\" (UID: \"eed8b95a-e314-4ab9-91f4-06df2649e614\") " Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.837212 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eed8b95a-e314-4ab9-91f4-06df2649e614-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.845868 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts" (OuterVolumeSpecName: "scripts") pod "eed8b95a-e314-4ab9-91f4-06df2649e614" (UID: "eed8b95a-e314-4ab9-91f4-06df2649e614"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.845867 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw" (OuterVolumeSpecName: "kube-api-access-lpstw") pod "eed8b95a-e314-4ab9-91f4-06df2649e614" (UID: "eed8b95a-e314-4ab9-91f4-06df2649e614"). InnerVolumeSpecName "kube-api-access-lpstw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.864733 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eed8b95a-e314-4ab9-91f4-06df2649e614" (UID: "eed8b95a-e314-4ab9-91f4-06df2649e614"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.891896 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data" (OuterVolumeSpecName: "config-data") pod "eed8b95a-e314-4ab9-91f4-06df2649e614" (UID: "eed8b95a-e314-4ab9-91f4-06df2649e614"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.940206 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpstw\" (UniqueName: \"kubernetes.io/projected/eed8b95a-e314-4ab9-91f4-06df2649e614-kube-api-access-lpstw\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.940235 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.940247 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:44 crc kubenswrapper[4801]: I1206 03:27:44.940259 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eed8b95a-e314-4ab9-91f4-06df2649e614-config-data\") on node \"crc\" DevicePath \"\"" Dec 
06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.202532 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-66f8fdb7b9-xsvqm"] Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.405397 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jbzbp" event={"ID":"eed8b95a-e314-4ab9-91f4-06df2649e614","Type":"ContainerDied","Data":"11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94"} Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.405441 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11515578d236b19b4e15ed46199f337caf03884a2d1ed4bb8383858399831e94" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.405501 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jbzbp" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.408630 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66f8fdb7b9-xsvqm" event={"ID":"09d46a1d-755b-43d4-81f5-3a3be44ea3d4","Type":"ContainerStarted","Data":"0bd4b1e773839dd41cb5c3299bfe4967a365e482328f2fcf2b409cf20a3f9f17"} Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.411853 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerStarted","Data":"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac"} Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.531174 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55dddf74fb-zbzw5"] Dec 06 03:27:45 crc kubenswrapper[4801]: E1206 03:27:45.531562 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eed8b95a-e314-4ab9-91f4-06df2649e614" containerName="placement-db-sync" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.531579 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="eed8b95a-e314-4ab9-91f4-06df2649e614" containerName="placement-db-sync" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.531794 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="eed8b95a-e314-4ab9-91f4-06df2649e614" containerName="placement-db-sync" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.532653 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.536348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.539211 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.539655 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.539820 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.539824 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-g9xd9" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.543740 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55dddf74fb-zbzw5"] Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-scripts\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665423 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-config-data\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665487 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0a4a82-0f66-4716-9b11-fc2015676f79-logs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665502 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-combined-ca-bundle\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665523 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c579\" (UniqueName: \"kubernetes.io/projected/6f0a4a82-0f66-4716-9b11-fc2015676f79-kube-api-access-2c579\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665548 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-public-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.665567 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-internal-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767511 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-config-data\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767615 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0a4a82-0f66-4716-9b11-fc2015676f79-logs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767641 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-combined-ca-bundle\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767662 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c579\" (UniqueName: \"kubernetes.io/projected/6f0a4a82-0f66-4716-9b11-fc2015676f79-kube-api-access-2c579\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767694 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-public-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-internal-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.767812 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-scripts\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.768504 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0a4a82-0f66-4716-9b11-fc2015676f79-logs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.773516 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-public-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.773831 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-combined-ca-bundle\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.774214 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-scripts\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.774576 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-config-data\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.775660 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0a4a82-0f66-4716-9b11-fc2015676f79-internal-tls-certs\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.784682 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:27:45 crc kubenswrapper[4801]: I1206 03:27:45.788092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c579\" (UniqueName: \"kubernetes.io/projected/6f0a4a82-0f66-4716-9b11-fc2015676f79-kube-api-access-2c579\") pod \"placement-55dddf74fb-zbzw5\" (UID: \"6f0a4a82-0f66-4716-9b11-fc2015676f79\") " pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.857545 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.869263 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snwpt\" (UniqueName: \"kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt\") pod \"3842042e-a4c9-4f33-bda5-b11f58a69519\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.869493 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data\") pod \"3842042e-a4c9-4f33-bda5-b11f58a69519\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.869537 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle\") pod \"3842042e-a4c9-4f33-bda5-b11f58a69519\" (UID: \"3842042e-a4c9-4f33-bda5-b11f58a69519\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.874419 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod 
"3842042e-a4c9-4f33-bda5-b11f58a69519" (UID: "3842042e-a4c9-4f33-bda5-b11f58a69519"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.874618 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt" (OuterVolumeSpecName: "kube-api-access-snwpt") pod "3842042e-a4c9-4f33-bda5-b11f58a69519" (UID: "3842042e-a4c9-4f33-bda5-b11f58a69519"). InnerVolumeSpecName "kube-api-access-snwpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.907278 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3842042e-a4c9-4f33-bda5-b11f58a69519" (UID: "3842042e-a4c9-4f33-bda5-b11f58a69519"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.971222 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.971539 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3842042e-a4c9-4f33-bda5-b11f58a69519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:45.971548 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snwpt\" (UniqueName: \"kubernetes.io/projected/3842042e-a4c9-4f33-bda5-b11f58a69519-kube-api-access-snwpt\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.001057 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tp4d2" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.073856 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lj95\" (UniqueName: \"kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95\") pod \"861fdd2b-c39c-4122-94a2-8eb5744c1536\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.074023 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data\") pod \"861fdd2b-c39c-4122-94a2-8eb5744c1536\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.074124 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle\") pod \"861fdd2b-c39c-4122-94a2-8eb5744c1536\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.074247 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data\") pod \"861fdd2b-c39c-4122-94a2-8eb5744c1536\" (UID: \"861fdd2b-c39c-4122-94a2-8eb5744c1536\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.078378 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95" (OuterVolumeSpecName: "kube-api-access-4lj95") pod "861fdd2b-c39c-4122-94a2-8eb5744c1536" (UID: "861fdd2b-c39c-4122-94a2-8eb5744c1536"). InnerVolumeSpecName "kube-api-access-4lj95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.082599 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "861fdd2b-c39c-4122-94a2-8eb5744c1536" (UID: "861fdd2b-c39c-4122-94a2-8eb5744c1536"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.102211 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "861fdd2b-c39c-4122-94a2-8eb5744c1536" (UID: "861fdd2b-c39c-4122-94a2-8eb5744c1536"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.119035 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data" (OuterVolumeSpecName: "config-data") pod "861fdd2b-c39c-4122-94a2-8eb5744c1536" (UID: "861fdd2b-c39c-4122-94a2-8eb5744c1536"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.176227 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.176289 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lj95\" (UniqueName: \"kubernetes.io/projected/861fdd2b-c39c-4122-94a2-8eb5744c1536-kube-api-access-4lj95\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.176299 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.176310 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/861fdd2b-c39c-4122-94a2-8eb5744c1536-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.429669 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tp4d2" event={"ID":"861fdd2b-c39c-4122-94a2-8eb5744c1536","Type":"ContainerDied","Data":"bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a"} Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.429708 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd00d19b892bcd43708cc527abe0ed1974dca8407cd9550cb256c868ae5aaf3a" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.429808 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tp4d2" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.447895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-66f8fdb7b9-xsvqm" event={"ID":"09d46a1d-755b-43d4-81f5-3a3be44ea3d4","Type":"ContainerStarted","Data":"cf4ac416c7930ee48127d5b5773561690e06bf41985e18804435eec67b584dc9"} Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.448998 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.456569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-qwr7p" event={"ID":"3842042e-a4c9-4f33-bda5-b11f58a69519","Type":"ContainerDied","Data":"b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76"} Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.456609 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e28f041c569a2453003f21f6d0ddc6b597be2de40f1cd3f0a4c7e52029dd76" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.456640 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-qwr7p" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.488216 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-66f8fdb7b9-xsvqm" podStartSLOduration=2.488191107 podStartE2EDuration="2.488191107s" podCreationTimestamp="2025-12-06 03:27:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:46.475270891 +0000 UTC m=+1319.597878463" watchObservedRunningTime="2025-12-06 03:27:46.488191107 +0000 UTC m=+1319.610798679" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.774514 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7bc55fb7dc-pm7jf"] Dec 06 03:27:47 crc kubenswrapper[4801]: E1206 03:27:46.774970 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="861fdd2b-c39c-4122-94a2-8eb5744c1536" containerName="glance-db-sync" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.774985 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="861fdd2b-c39c-4122-94a2-8eb5744c1536" containerName="glance-db-sync" Dec 06 03:27:47 crc kubenswrapper[4801]: E1206 03:27:46.775006 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" containerName="barbican-db-sync" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.775012 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" containerName="barbican-db-sync" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.775177 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="861fdd2b-c39c-4122-94a2-8eb5744c1536" containerName="glance-db-sync" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.775194 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" containerName="barbican-db-sync" Dec 
06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.776006 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.781056 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.781621 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.783258 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-wxs88" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.791348 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bc55fb7dc-pm7jf"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.855290 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-664fff78fd-lzlf4"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.857332 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.865696 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.881958 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-664fff78fd-lzlf4"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.896843 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-combined-ca-bundle\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.896917 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5k7\" (UniqueName: \"kubernetes.io/projected/3aa76c70-d21a-495e-8599-9ca195e8fe53-kube-api-access-fg5k7\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.896953 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa76c70-d21a-495e-8599-9ca195e8fe53-logs\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.897013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data-custom\") pod 
\"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.897046 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.944661 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-n2wf9"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.947175 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.988360 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-n2wf9"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.998948 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.998987 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65827ef2-44cd-4f16-82a9-9b746243a301-logs\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999023 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-combined-ca-bundle\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999070 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-combined-ca-bundle\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999096 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zjmr\" (UniqueName: \"kubernetes.io/projected/65827ef2-44cd-4f16-82a9-9b746243a301-kube-api-access-6zjmr\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999118 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5k7\" (UniqueName: \"kubernetes.io/projected/3aa76c70-d21a-495e-8599-9ca195e8fe53-kube-api-access-fg5k7\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999141 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa76c70-d21a-495e-8599-9ca195e8fe53-logs\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 
03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999165 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data-custom\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999196 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data-custom\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:46.999220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.001933 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3aa76c70-d21a-495e-8599-9ca195e8fe53-logs\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.011582 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-combined-ca-bundle\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 
03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.011660 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.012717 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data-custom\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.013176 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.090040 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3aa76c70-d21a-495e-8599-9ca195e8fe53-config-data\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.090589 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.102625 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5k7\" (UniqueName: \"kubernetes.io/projected/3aa76c70-d21a-495e-8599-9ca195e8fe53-kube-api-access-fg5k7\") pod \"barbican-worker-7bc55fb7dc-pm7jf\" (UID: \"3aa76c70-d21a-495e-8599-9ca195e8fe53\") " pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.108514 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-n2wf9"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111586 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65827ef2-44cd-4f16-82a9-9b746243a301-logs\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111697 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111744 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-combined-ca-bundle\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111790 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111835 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bn9\" (UniqueName: \"kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111868 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zjmr\" (UniqueName: \"kubernetes.io/projected/65827ef2-44cd-4f16-82a9-9b746243a301-kube-api-access-6zjmr\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111891 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111926 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 
03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.111954 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data-custom\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.112882 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.112971 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.113028 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48c6\" (UniqueName: \"kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.113057 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " 
pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.115082 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/65827ef2-44cd-4f16-82a9-9b746243a301-logs\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.116613 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data-custom\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: E1206 03:27:47.117276 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-r48c6 ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" podUID="61381354-8112-4fd4-bd1b-aa877af9f850" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.117387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-config-data\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.120644 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.125207 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65827ef2-44cd-4f16-82a9-9b746243a301-combined-ca-bundle\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.130996 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.168068 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zjmr\" (UniqueName: \"kubernetes.io/projected/65827ef2-44cd-4f16-82a9-9b746243a301-kube-api-access-6zjmr\") pod \"barbican-keystone-listener-664fff78fd-lzlf4\" (UID: \"65827ef2-44cd-4f16-82a9-9b746243a301\") " pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.170682 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.176289 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.200096 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217717 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh4dv\" (UniqueName: \"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217819 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217869 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bn9\" (UniqueName: \"kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217939 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: 
\"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.217968 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.218003 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.218027 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.218052 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.222745 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " 
pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.223357 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.223531 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.226073 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.228685 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.234228 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.234344 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r48c6\" (UniqueName: 
\"kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.234446 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.234551 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.234721 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.235592 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.236181 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs\") 
pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.253067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.256052 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.273343 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.290574 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bn9\" (UniqueName: \"kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9\") pod \"barbican-api-5b46c4b9d6-q75vm\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.341416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: 
\"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.341493 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.341536 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.341655 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh4dv\" (UniqueName: \"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.341798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.343723 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.343969 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.344280 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48c6\" (UniqueName: \"kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6\") pod \"dnsmasq-dns-7c55bf9497-n2wf9\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.358904 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.365319 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.375977 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc\") pod \"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.438262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh4dv\" (UniqueName: \"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv\") pod 
\"dnsmasq-dns-699df9757c-d4ndf\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.466966 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.486004 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.538326 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.545261 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.648377 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb\") pod \"61381354-8112-4fd4-bd1b-aa877af9f850\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.648786 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb\") pod \"61381354-8112-4fd4-bd1b-aa877af9f850\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.648922 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc\") pod \"61381354-8112-4fd4-bd1b-aa877af9f850\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.648947 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config\") pod \"61381354-8112-4fd4-bd1b-aa877af9f850\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.649036 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r48c6\" (UniqueName: \"kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6\") pod \"61381354-8112-4fd4-bd1b-aa877af9f850\" (UID: \"61381354-8112-4fd4-bd1b-aa877af9f850\") " Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.649459 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "61381354-8112-4fd4-bd1b-aa877af9f850" (UID: "61381354-8112-4fd4-bd1b-aa877af9f850"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.649492 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "61381354-8112-4fd4-bd1b-aa877af9f850" (UID: "61381354-8112-4fd4-bd1b-aa877af9f850"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.649911 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "61381354-8112-4fd4-bd1b-aa877af9f850" (UID: "61381354-8112-4fd4-bd1b-aa877af9f850"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.650208 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config" (OuterVolumeSpecName: "config") pod "61381354-8112-4fd4-bd1b-aa877af9f850" (UID: "61381354-8112-4fd4-bd1b-aa877af9f850"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.662047 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6" (OuterVolumeSpecName: "kube-api-access-r48c6") pod "61381354-8112-4fd4-bd1b-aa877af9f850" (UID: "61381354-8112-4fd4-bd1b-aa877af9f850"). InnerVolumeSpecName "kube-api-access-r48c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.751878 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.751910 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.751920 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.751930 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r48c6\" (UniqueName: \"kubernetes.io/projected/61381354-8112-4fd4-bd1b-aa877af9f850-kube-api-access-r48c6\") on node \"crc\" DevicePath \"\"" Dec 06 
03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.751940 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/61381354-8112-4fd4-bd1b-aa877af9f850-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:27:47 crc kubenswrapper[4801]: I1206 03:27:47.852523 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55dddf74fb-zbzw5"] Dec 06 03:27:47 crc kubenswrapper[4801]: W1206 03:27:47.866124 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f0a4a82_0f66_4716_9b11_fc2015676f79.slice/crio-488e5ce782f0d917b500572e4d30567a40c8fdda6b634eb693574851a1e671b1 WatchSource:0}: Error finding container 488e5ce782f0d917b500572e4d30567a40c8fdda6b634eb693574851a1e671b1: Status 404 returned error can't find the container with id 488e5ce782f0d917b500572e4d30567a40c8fdda6b634eb693574851a1e671b1 Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.068436 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7bc55fb7dc-pm7jf"] Dec 06 03:27:48 crc kubenswrapper[4801]: W1206 03:27:48.081420 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa76c70_d21a_495e_8599_9ca195e8fe53.slice/crio-81caa77ef0cf73cb9f311fe1c3950534f5cbebdfedfe6b8216f92100672b0333 WatchSource:0}: Error finding container 81caa77ef0cf73cb9f311fe1c3950534f5cbebdfedfe6b8216f92100672b0333: Status 404 returned error can't find the container with id 81caa77ef0cf73cb9f311fe1c3950534f5cbebdfedfe6b8216f92100672b0333 Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.217532 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-664fff78fd-lzlf4"] Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.226190 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:27:48 crc kubenswrapper[4801]: W1206 03:27:48.226745 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd9cef0e_c1be_4f88_9c34_ac3ec60ffb78.slice/crio-6980d4d974ce582b1a39e7421f09311d4157f20259cdd6201d7ca0302d8c6c6e WatchSource:0}: Error finding container 6980d4d974ce582b1a39e7421f09311d4157f20259cdd6201d7ca0302d8c6c6e: Status 404 returned error can't find the container with id 6980d4d974ce582b1a39e7421f09311d4157f20259cdd6201d7ca0302d8c6c6e Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.327055 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:27:48 crc kubenswrapper[4801]: W1206 03:27:48.350040 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065ef35f_50b6_4eb5_b46c_961b40e0e29f.slice/crio-d33d4ace74987b2b231068dae074d84f1540aca229842df9b69d077a47495229 WatchSource:0}: Error finding container d33d4ace74987b2b231068dae074d84f1540aca229842df9b69d077a47495229: Status 404 returned error can't find the container with id d33d4ace74987b2b231068dae074d84f1540aca229842df9b69d077a47495229 Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.523067 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55dddf74fb-zbzw5" event={"ID":"6f0a4a82-0f66-4716-9b11-fc2015676f79","Type":"ContainerStarted","Data":"c54d2668b3421824000b0e03e5dc80c62a37c3c9911b218f94d958d6b032d05c"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.523150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55dddf74fb-zbzw5" event={"ID":"6f0a4a82-0f66-4716-9b11-fc2015676f79","Type":"ContainerStarted","Data":"488e5ce782f0d917b500572e4d30567a40c8fdda6b634eb693574851a1e671b1"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.525158 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerStarted","Data":"6980d4d974ce582b1a39e7421f09311d4157f20259cdd6201d7ca0302d8c6c6e"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.534308 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" event={"ID":"065ef35f-50b6-4eb5-b46c-961b40e0e29f","Type":"ContainerStarted","Data":"d33d4ace74987b2b231068dae074d84f1540aca229842df9b69d077a47495229"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.539451 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" event={"ID":"65827ef2-44cd-4f16-82a9-9b746243a301","Type":"ContainerStarted","Data":"de0cd094045857b021b19e12419fcf80a698eff58d50df47841e1ecc67e9fe5d"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.544739 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c55bf9497-n2wf9" Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.546287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" event={"ID":"3aa76c70-d21a-495e-8599-9ca195e8fe53","Type":"ContainerStarted","Data":"81caa77ef0cf73cb9f311fe1c3950534f5cbebdfedfe6b8216f92100672b0333"} Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.663040 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-n2wf9"] Dec 06 03:27:48 crc kubenswrapper[4801]: I1206 03:27:48.707700 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c55bf9497-n2wf9"] Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.234450 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61381354-8112-4fd4-bd1b-aa877af9f850" path="/var/lib/kubelet/pods/61381354-8112-4fd4-bd1b-aa877af9f850/volumes" Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.568797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerStarted","Data":"748da77163e8e99d7ade93293b460c7abaee009e87b3beddae21f11ad06271f4"} Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.568851 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerStarted","Data":"f4f19ee7cf3632a73bea6db4e18c5b8737da2171cc0695597a60b35af5325638"} Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.568900 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.568923 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:27:49 crc kubenswrapper[4801]: 
I1206 03:27:49.571151 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55dddf74fb-zbzw5" event={"ID":"6f0a4a82-0f66-4716-9b11-fc2015676f79","Type":"ContainerStarted","Data":"01af7c14c1301d4d66df33b7bc6171de4894c60b9a939e4c66bdbec07d87d414"} Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.571669 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.571699 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.573618 4801 generic.go:334] "Generic (PLEG): container finished" podID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerID="f0c9254c8a46214b67f0c0e1bddac22016c28efdd6bacdde200e489a6efb21d1" exitCode=0 Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.573656 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" event={"ID":"065ef35f-50b6-4eb5-b46c-961b40e0e29f","Type":"ContainerDied","Data":"f0c9254c8a46214b67f0c0e1bddac22016c28efdd6bacdde200e489a6efb21d1"} Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.593717 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podStartSLOduration=3.593672027 podStartE2EDuration="3.593672027s" podCreationTimestamp="2025-12-06 03:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:49.589496676 +0000 UTC m=+1322.712104248" watchObservedRunningTime="2025-12-06 03:27:49.593672027 +0000 UTC m=+1322.716279599" Dec 06 03:27:49 crc kubenswrapper[4801]: I1206 03:27:49.650789 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55dddf74fb-zbzw5" podStartSLOduration=4.650771195 
podStartE2EDuration="4.650771195s" podCreationTimestamp="2025-12-06 03:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:49.647441496 +0000 UTC m=+1322.770049068" watchObservedRunningTime="2025-12-06 03:27:49.650771195 +0000 UTC m=+1322.773378767" Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.616786 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" event={"ID":"3aa76c70-d21a-495e-8599-9ca195e8fe53","Type":"ContainerStarted","Data":"6df0e130a733350c53312918696fc3fff440bcdac0a166649653b5f834425449"} Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.617738 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" event={"ID":"3aa76c70-d21a-495e-8599-9ca195e8fe53","Type":"ContainerStarted","Data":"1baa8911d445b92ebcea735574153f504727c88db23b4e6f9416ac8cdc16f454"} Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.620839 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" event={"ID":"065ef35f-50b6-4eb5-b46c-961b40e0e29f","Type":"ContainerStarted","Data":"59da8b3ed7a8130f87344b2d9fc497b9866bd883d3a4263cdfe64c35c0e5226a"} Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.621317 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.626961 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" event={"ID":"65827ef2-44cd-4f16-82a9-9b746243a301","Type":"ContainerStarted","Data":"a0c9b0fb43a65ea0e49dd4f379118d5b69ddad5286787c0bde8f3d4e5e33e365"} Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.627375 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" 
event={"ID":"65827ef2-44cd-4f16-82a9-9b746243a301","Type":"ContainerStarted","Data":"5ccde450227ea8122ad7588228c604a20d3428adbaabc5b532d58ab962adfd0b"} Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.650683 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7bc55fb7dc-pm7jf" podStartSLOduration=3.379540881 podStartE2EDuration="5.650659115s" podCreationTimestamp="2025-12-06 03:27:46 +0000 UTC" firstStartedPulling="2025-12-06 03:27:48.085445078 +0000 UTC m=+1321.208052650" lastFinishedPulling="2025-12-06 03:27:50.356563322 +0000 UTC m=+1323.479170884" observedRunningTime="2025-12-06 03:27:51.642710663 +0000 UTC m=+1324.765318245" watchObservedRunningTime="2025-12-06 03:27:51.650659115 +0000 UTC m=+1324.773266687" Dec 06 03:27:51 crc kubenswrapper[4801]: I1206 03:27:51.671539 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" podStartSLOduration=4.671516683 podStartE2EDuration="4.671516683s" podCreationTimestamp="2025-12-06 03:27:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:27:51.668469591 +0000 UTC m=+1324.791077163" watchObservedRunningTime="2025-12-06 03:27:51.671516683 +0000 UTC m=+1324.794124255" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.159878 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-664fff78fd-lzlf4" podStartSLOduration=4.060153544 podStartE2EDuration="6.159860494s" podCreationTimestamp="2025-12-06 03:27:46 +0000 UTC" firstStartedPulling="2025-12-06 03:27:48.279076897 +0000 UTC m=+1321.401684469" lastFinishedPulling="2025-12-06 03:27:50.378783847 +0000 UTC m=+1323.501391419" observedRunningTime="2025-12-06 03:27:51.689000681 +0000 UTC m=+1324.811608253" watchObservedRunningTime="2025-12-06 03:27:52.159860494 +0000 UTC m=+1325.282468066" Dec 06 
03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.161221 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-66dd4c5cfd-fvt9d"] Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.222854 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66dd4c5cfd-fvt9d"] Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.223806 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.227483 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.232917 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.276904 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxclm\" (UniqueName: \"kubernetes.io/projected/146f23e1-de81-444d-88cb-a41601ffd36d-kube-api-access-wxclm\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277118 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146f23e1-de81-444d-88cb-a41601ffd36d-logs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277170 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-combined-ca-bundle\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: 
\"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277188 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-public-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277261 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277303 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data-custom\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.277346 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-internal-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378403 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data-custom\") pod 
\"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378460 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-internal-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378507 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxclm\" (UniqueName: \"kubernetes.io/projected/146f23e1-de81-444d-88cb-a41601ffd36d-kube-api-access-wxclm\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378531 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146f23e1-de81-444d-88cb-a41601ffd36d-logs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378581 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-combined-ca-bundle\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-public-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: 
\"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.378663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.379476 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/146f23e1-de81-444d-88cb-a41601ffd36d-logs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.383965 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-internal-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.384217 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data-custom\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.384785 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-combined-ca-bundle\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" 
Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.389077 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-config-data\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.389559 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/146f23e1-de81-444d-88cb-a41601ffd36d-public-tls-certs\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.397543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxclm\" (UniqueName: \"kubernetes.io/projected/146f23e1-de81-444d-88cb-a41601ffd36d-kube-api-access-wxclm\") pod \"barbican-api-66dd4c5cfd-fvt9d\" (UID: \"146f23e1-de81-444d-88cb-a41601ffd36d\") " pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:52 crc kubenswrapper[4801]: I1206 03:27:52.545039 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:27:57 crc kubenswrapper[4801]: I1206 03:27:57.541005 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:27:57 crc kubenswrapper[4801]: I1206 03:27:57.617631 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"] Dec 06 03:27:57 crc kubenswrapper[4801]: I1206 03:27:57.617935 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" containerID="cri-o://6596d84f87e27602be7bbf8c835e1e3c73429d39fb031cf2eb171e7d3fe2c58d" gracePeriod=10 Dec 06 03:27:58 crc kubenswrapper[4801]: I1206 03:27:58.553009 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:27:58 crc kubenswrapper[4801]: I1206 03:27:58.553017 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:27:59 crc kubenswrapper[4801]: I1206 03:27:59.824427 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Dec 06 03:28:02 crc kubenswrapper[4801]: I1206 03:28:02.551948 4801 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:02 crc kubenswrapper[4801]: I1206 03:28:02.551978 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:03 crc kubenswrapper[4801]: I1206 03:28:03.639017 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:03 crc kubenswrapper[4801]: I1206 03:28:03.639032 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:04 crc kubenswrapper[4801]: I1206 03:28:04.825089 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: connect: connection refused" Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.635057 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" 
podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.635333 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.794313 4801 generic.go:334] "Generic (PLEG): container finished" podID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerID="6596d84f87e27602be7bbf8c835e1e3c73429d39fb031cf2eb171e7d3fe2c58d" exitCode=0 Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.794409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" event={"ID":"e411b0df-3e92-41a9-a26b-0dea6c28cb97","Type":"ContainerDied","Data":"6596d84f87e27602be7bbf8c835e1e3c73429d39fb031cf2eb171e7d3fe2c58d"} Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.882859 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:28:07 crc kubenswrapper[4801]: I1206 03:28:07.886626 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.169777 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.170415 4801 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.402579 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.559406 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb\") pod \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.559462 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc\") pod \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.559579 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config\") pod \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.559735 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhcjn\" (UniqueName: \"kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn\") pod \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.559817 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb\") pod \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\" (UID: \"e411b0df-3e92-41a9-a26b-0dea6c28cb97\") " Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.567665 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn" (OuterVolumeSpecName: "kube-api-access-rhcjn") pod "e411b0df-3e92-41a9-a26b-0dea6c28cb97" (UID: "e411b0df-3e92-41a9-a26b-0dea6c28cb97"). InnerVolumeSpecName "kube-api-access-rhcjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.610824 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e411b0df-3e92-41a9-a26b-0dea6c28cb97" (UID: "e411b0df-3e92-41a9-a26b-0dea6c28cb97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.616130 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e411b0df-3e92-41a9-a26b-0dea6c28cb97" (UID: "e411b0df-3e92-41a9-a26b-0dea6c28cb97"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.617666 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config" (OuterVolumeSpecName: "config") pod "e411b0df-3e92-41a9-a26b-0dea6c28cb97" (UID: "e411b0df-3e92-41a9-a26b-0dea6c28cb97"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.640291 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e411b0df-3e92-41a9-a26b-0dea6c28cb97" (UID: "e411b0df-3e92-41a9-a26b-0dea6c28cb97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.662230 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.662274 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhcjn\" (UniqueName: \"kubernetes.io/projected/e411b0df-3e92-41a9-a26b-0dea6c28cb97-kube-api-access-rhcjn\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.662288 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.662298 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.662308 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e411b0df-3e92-41a9-a26b-0dea6c28cb97-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.752824 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-66dd4c5cfd-fvt9d"] Dec 06 03:28:11 crc 
kubenswrapper[4801]: W1206 03:28:11.752883 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod146f23e1_de81_444d_88cb_a41601ffd36d.slice/crio-872182532b4e64b140dbc70f72a61d68ee9d2460f277a54b6bfb76f89b92a1ca WatchSource:0}: Error finding container 872182532b4e64b140dbc70f72a61d68ee9d2460f277a54b6bfb76f89b92a1ca: Status 404 returned error can't find the container with id 872182532b4e64b140dbc70f72a61d68ee9d2460f277a54b6bfb76f89b92a1ca Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.849628 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7s22l" event={"ID":"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f","Type":"ContainerStarted","Data":"5d5c37e3b6a3b18af919da0a5823fb1c123a0f4a1461e56cc227aec4964136b9"} Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.851953 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" event={"ID":"146f23e1-de81-444d-88cb-a41601ffd36d","Type":"ContainerStarted","Data":"872182532b4e64b140dbc70f72a61d68ee9d2460f277a54b6bfb76f89b92a1ca"} Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.853264 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" event={"ID":"e411b0df-3e92-41a9-a26b-0dea6c28cb97","Type":"ContainerDied","Data":"7aa3d5d12d7413e6992a318f9efe64fdd22824ba3db8f5a1629343b3c7b3f3ed"} Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.853299 4801 scope.go:117] "RemoveContainer" containerID="6596d84f87e27602be7bbf8c835e1e3c73429d39fb031cf2eb171e7d3fe2c58d" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.853430 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.877816 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7s22l" podStartSLOduration=22.196288652 podStartE2EDuration="1m37.877795141s" podCreationTimestamp="2025-12-06 03:26:34 +0000 UTC" firstStartedPulling="2025-12-06 03:26:35.106803501 +0000 UTC m=+1248.229411073" lastFinishedPulling="2025-12-06 03:27:50.78830999 +0000 UTC m=+1323.910917562" observedRunningTime="2025-12-06 03:28:11.867739062 +0000 UTC m=+1344.990346634" watchObservedRunningTime="2025-12-06 03:28:11.877795141 +0000 UTC m=+1345.000402723" Dec 06 03:28:11 crc kubenswrapper[4801]: E1206 03:28:11.890263 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Dec 06 03:28:11 crc kubenswrapper[4801]: E1206 03:28:11.890557 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xq8mq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(28d06e7d-a469-4050-9f2c-db9da8389c58): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 03:28:11 crc kubenswrapper[4801]: E1206 03:28:11.891765 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.903099 4801 scope.go:117] "RemoveContainer" containerID="960c690df593d1971022c8bccd181fdceb0b3241c7c73ab56a7bba18ca9eed49" Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.936219 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"] Dec 06 03:28:11 crc kubenswrapper[4801]: I1206 03:28:11.943140 4801 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-745b9ddc8c-t8cjh"] Dec 06 03:28:12 crc kubenswrapper[4801]: I1206 03:28:12.865920 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" event={"ID":"146f23e1-de81-444d-88cb-a41601ffd36d","Type":"ContainerStarted","Data":"5c5a6abd454c175fae8a1b9a9f490f08b9216f47eb650a8b7de0329bc1081196"} Dec 06 03:28:12 crc kubenswrapper[4801]: I1206 03:28:12.867708 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="ceilometer-notification-agent" containerID="cri-o://e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7" gracePeriod=30 Dec 06 03:28:12 crc kubenswrapper[4801]: I1206 03:28:12.867822 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="sg-core" containerID="cri-o://f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac" gracePeriod=30 Dec 06 03:28:13 crc kubenswrapper[4801]: I1206 03:28:13.223434 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" path="/var/lib/kubelet/pods/e411b0df-3e92-41a9-a26b-0dea6c28cb97/volumes" Dec 06 03:28:13 crc kubenswrapper[4801]: I1206 03:28:13.901099 4801 generic.go:334] "Generic (PLEG): container finished" podID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerID="f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac" exitCode=2 Dec 06 03:28:13 crc kubenswrapper[4801]: I1206 03:28:13.901150 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerDied","Data":"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac"} Dec 06 03:28:14 crc kubenswrapper[4801]: I1206 03:28:14.824162 4801 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-745b9ddc8c-t8cjh" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.132:5353: i/o timeout" Dec 06 03:28:14 crc kubenswrapper[4801]: I1206 03:28:14.912094 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" event={"ID":"146f23e1-de81-444d-88cb-a41601ffd36d","Type":"ContainerStarted","Data":"fc93b4733ccb33922747d65e5e439a4844b361a86235e325b084b6a7d3bc63b9"} Dec 06 03:28:15 crc kubenswrapper[4801]: I1206 03:28:15.927197 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:28:15 crc kubenswrapper[4801]: I1206 03:28:15.928126 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:28:15 crc kubenswrapper[4801]: I1206 03:28:15.966376 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" podStartSLOduration=23.966344635 podStartE2EDuration="23.966344635s" podCreationTimestamp="2025-12-06 03:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:28:15.954394525 +0000 UTC m=+1349.077002107" watchObservedRunningTime="2025-12-06 03:28:15.966344635 +0000 UTC m=+1349.088952217" Dec 06 03:28:18 crc kubenswrapper[4801]: I1206 03:28:18.217657 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-66f8fdb7b9-xsvqm" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.351292 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.829528 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865516 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865584 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865615 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865637 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8mq\" (UniqueName: \"kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865682 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865724 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.865771 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd\") pod \"28d06e7d-a469-4050-9f2c-db9da8389c58\" (UID: \"28d06e7d-a469-4050-9f2c-db9da8389c58\") " Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.867153 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.867913 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.872348 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq" (OuterVolumeSpecName: "kube-api-access-xq8mq") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "kube-api-access-xq8mq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.873451 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts" (OuterVolumeSpecName: "scripts") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.894012 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data" (OuterVolumeSpecName: "config-data") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.902822 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.916621 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28d06e7d-a469-4050-9f2c-db9da8389c58" (UID: "28d06e7d-a469-4050-9f2c-db9da8389c58"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.949836 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 03:28:21 crc kubenswrapper[4801]: E1206 03:28:21.950455 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950484 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" Dec 06 03:28:21 crc kubenswrapper[4801]: E1206 03:28:21.950501 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="init" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950510 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="init" Dec 06 03:28:21 crc kubenswrapper[4801]: E1206 03:28:21.950543 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="sg-core" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950552 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="sg-core" Dec 06 03:28:21 crc kubenswrapper[4801]: E1206 03:28:21.950582 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="ceilometer-notification-agent" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950592 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="ceilometer-notification-agent" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950858 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="sg-core" Dec 06 03:28:21 crc 
kubenswrapper[4801]: I1206 03:28:21.950909 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerName="ceilometer-notification-agent" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.950923 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e411b0df-3e92-41a9-a26b-0dea6c28cb97" containerName="dnsmasq-dns" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.951799 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.954400 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.954621 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-9rpp6" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.956614 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.960769 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.966827 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.966874 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znnnc\" (UniqueName: \"kubernetes.io/projected/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-kube-api-access-znnnc\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " 
pod="openstack/openstackclient" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.966944 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967243 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967463 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967490 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967508 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967520 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28d06e7d-a469-4050-9f2c-db9da8389c58-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967532 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967548 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8mq\" (UniqueName: \"kubernetes.io/projected/28d06e7d-a469-4050-9f2c-db9da8389c58-kube-api-access-xq8mq\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:21 crc kubenswrapper[4801]: I1206 03:28:21.967563 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28d06e7d-a469-4050-9f2c-db9da8389c58-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.005353 4801 generic.go:334] "Generic (PLEG): container finished" podID="28d06e7d-a469-4050-9f2c-db9da8389c58" containerID="e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7" exitCode=0 Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.005403 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerDied","Data":"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7"} Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.005445 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28d06e7d-a469-4050-9f2c-db9da8389c58","Type":"ContainerDied","Data":"e1e8e337f5686a69831b912a0d04945b422a9aed85a1e51729e9b7ff8def905a"} Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.005468 4801 scope.go:117] "RemoveContainer" containerID="f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.005642 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.033131 4801 scope.go:117] "RemoveContainer" containerID="e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.069629 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.069688 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znnnc\" (UniqueName: \"kubernetes.io/projected/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-kube-api-access-znnnc\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.069773 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.069820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.073617 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config\") pod \"openstackclient\" 
(UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.075988 4801 scope.go:117] "RemoveContainer" containerID="f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac" Dec 06 03:28:22 crc kubenswrapper[4801]: E1206 03:28:22.076884 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac\": container with ID starting with f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac not found: ID does not exist" containerID="f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.076910 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac"} err="failed to get container status \"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac\": rpc error: code = NotFound desc = could not find container \"f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac\": container with ID starting with f081987a300dc9362486604e530e40621bfa08d745b6143c13ce3692804458ac not found: ID does not exist" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.076975 4801 scope.go:117] "RemoveContainer" containerID="e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.076952 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: E1206 03:28:22.077236 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7\": container with ID starting with e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7 not found: ID does not exist" containerID="e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.077274 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7"} err="failed to get container status \"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7\": rpc error: code = NotFound desc = could not find container \"e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7\": container with ID starting with e3fd6655fd136eb94acd0f27c55d656668a24c36a9d03a3c3eaf3b6a68b710e7 not found: ID does not exist" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.077573 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.078177 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.092507 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.094210 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znnnc\" (UniqueName: \"kubernetes.io/projected/2e59b1c6-c154-42d6-8b79-b35b3bf48cf7-kube-api-access-znnnc\") pod \"openstackclient\" (UID: \"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7\") " pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 
03:28:22.103601 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.105849 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.109636 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.110420 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.119422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmb6k\" (UniqueName: \"kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172718 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172798 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.172825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273363 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data\") pod 
\"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273479 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmb6k\" (UniqueName: \"kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273546 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273655 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.274018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273981 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.273934 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.277586 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.278258 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.278495 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.278833 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.284047 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.292683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmb6k\" (UniqueName: \"kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k\") pod \"ceilometer-0\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.469392 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.725631 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.738240 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:28:22 crc kubenswrapper[4801]: I1206 03:28:22.929575 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:22 crc kubenswrapper[4801]: W1206 03:28:22.931924 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41094792_7d92_4816_8ce1_cda462529daf.slice/crio-4033288a4768fc6677773b7ac03540f1e1b07cc6d0efadde80be41f8e4184d92 WatchSource:0}: Error finding container 4033288a4768fc6677773b7ac03540f1e1b07cc6d0efadde80be41f8e4184d92: Status 404 returned error can't find the container with id 4033288a4768fc6677773b7ac03540f1e1b07cc6d0efadde80be41f8e4184d92 Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.014594 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7","Type":"ContainerStarted","Data":"f361ccf48f84413fe316a1040c4a3e91c6d9217575c67c1589e9fd943afcda4e"} Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.017602 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerStarted","Data":"4033288a4768fc6677773b7ac03540f1e1b07cc6d0efadde80be41f8e4184d92"} Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.032243 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.037395 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55dddf74fb-zbzw5" Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.221580 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28d06e7d-a469-4050-9f2c-db9da8389c58" path="/var/lib/kubelet/pods/28d06e7d-a469-4050-9f2c-db9da8389c58/volumes" Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.241472 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-66dd4c5cfd-fvt9d" Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.325640 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.325908 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" containerID="cri-o://f4f19ee7cf3632a73bea6db4e18c5b8737da2171cc0695597a60b35af5325638" gracePeriod=30 Dec 06 03:28:23 crc kubenswrapper[4801]: I1206 03:28:23.326075 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b46c4b9d6-q75vm" 
podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" containerID="cri-o://748da77163e8e99d7ade93293b460c7abaee009e87b3beddae21f11ad06271f4" gracePeriod=30 Dec 06 03:28:25 crc kubenswrapper[4801]: I1206 03:28:25.035617 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerID="f4f19ee7cf3632a73bea6db4e18c5b8737da2171cc0695597a60b35af5325638" exitCode=143 Dec 06 03:28:25 crc kubenswrapper[4801]: I1206 03:28:25.035807 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerDied","Data":"f4f19ee7cf3632a73bea6db4e18c5b8737da2171cc0695597a60b35af5325638"} Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.055184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerStarted","Data":"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8"} Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.058471 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerID="748da77163e8e99d7ade93293b460c7abaee009e87b3beddae21f11ad06271f4" exitCode=0 Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.058514 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerDied","Data":"748da77163e8e99d7ade93293b460c7abaee009e87b3beddae21f11ad06271f4"} Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.517277 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.665668 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom\") pod \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.666250 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle\") pod \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.666283 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9bn9\" (UniqueName: \"kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9\") pod \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.666334 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data\") pod \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.666590 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs\") pod \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\" (UID: \"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78\") " Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.667966 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs" (OuterVolumeSpecName: "logs") pod "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" (UID: "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.681435 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" (UID: "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.681489 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9" (OuterVolumeSpecName: "kube-api-access-n9bn9") pod "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" (UID: "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78"). InnerVolumeSpecName "kube-api-access-n9bn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.707999 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" (UID: "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.729915 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data" (OuterVolumeSpecName: "config-data") pod "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" (UID: "fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.769796 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.769843 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.769857 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.769873 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9bn9\" (UniqueName: \"kubernetes.io/projected/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-kube-api-access-n9bn9\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:27 crc kubenswrapper[4801]: I1206 03:28:27.769884 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.070724 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b46c4b9d6-q75vm" event={"ID":"fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78","Type":"ContainerDied","Data":"6980d4d974ce582b1a39e7421f09311d4157f20259cdd6201d7ca0302d8c6c6e"} Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.070831 4801 scope.go:117] "RemoveContainer" containerID="748da77163e8e99d7ade93293b460c7abaee009e87b3beddae21f11ad06271f4" Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.070971 4801 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b46c4b9d6-q75vm" Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.083404 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerStarted","Data":"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3"} Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.107625 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.114976 4801 scope.go:117] "RemoveContainer" containerID="f4f19ee7cf3632a73bea6db4e18c5b8737da2171cc0695597a60b35af5325638" Dec 06 03:28:28 crc kubenswrapper[4801]: I1206 03:28:28.117543 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b46c4b9d6-q75vm"] Dec 06 03:28:29 crc kubenswrapper[4801]: I1206 03:28:29.223157 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" path="/var/lib/kubelet/pods/fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78/volumes" Dec 06 03:28:32 crc kubenswrapper[4801]: I1206 03:28:32.470968 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:32 crc kubenswrapper[4801]: I1206 03:28:32.471118 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b46c4b9d6-q75vm" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.141:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:28:33 crc kubenswrapper[4801]: 
I1206 03:28:33.136002 4801 generic.go:334] "Generic (PLEG): container finished" podID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" containerID="5d5c37e3b6a3b18af919da0a5823fb1c123a0f4a1461e56cc227aec4964136b9" exitCode=0 Dec 06 03:28:33 crc kubenswrapper[4801]: I1206 03:28:33.136094 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7s22l" event={"ID":"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f","Type":"ContainerDied","Data":"5d5c37e3b6a3b18af919da0a5823fb1c123a0f4a1461e56cc227aec4964136b9"} Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.150087 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerStarted","Data":"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d"} Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.152114 4801 generic.go:334] "Generic (PLEG): container finished" podID="57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" containerID="b929d467eea811d7bb7b6b5814208db044bb55663ad964587efa6bd04d133433" exitCode=0 Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.152195 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8db5z" event={"ID":"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb","Type":"ContainerDied","Data":"b929d467eea811d7bb7b6b5814208db044bb55663ad964587efa6bd04d133433"} Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.153656 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2e59b1c6-c154-42d6-8b79-b35b3bf48cf7","Type":"ContainerStarted","Data":"2a3f83405a4a9600936778a079cbd0386942d3dff6b352c2f3376cd81c2e1b47"} Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.588633 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7s22l" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698595 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px9vw\" (UniqueName: \"kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698706 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698769 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698786 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.698829 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id\") pod \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\" (UID: \"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f\") " Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.699228 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.706220 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts" (OuterVolumeSpecName: "scripts") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.706246 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw" (OuterVolumeSpecName: "kube-api-access-px9vw") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "kube-api-access-px9vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.706646 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.734686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.765969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data" (OuterVolumeSpecName: "config-data") pod "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" (UID: "e8a2ead4-9b5d-465c-9b4a-5c7377ad246f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800406 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px9vw\" (UniqueName: \"kubernetes.io/projected/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-kube-api-access-px9vw\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800454 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800467 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800480 4801 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-db-sync-config-data\") on node \"crc\" 
DevicePath \"\"" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800490 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:34 crc kubenswrapper[4801]: I1206 03:28:34.800500 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.164194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7s22l" event={"ID":"e8a2ead4-9b5d-465c-9b4a-5c7377ad246f","Type":"ContainerDied","Data":"71a9b50d9d8fe688a91fd1d49ea5ae459f7515a9d16c97b0e98dfee06d94a963"} Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.164244 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a9b50d9d8fe688a91fd1d49ea5ae459f7515a9d16c97b0e98dfee06d94a963" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.164366 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7s22l" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.215981 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.386064583 podStartE2EDuration="14.215959272s" podCreationTimestamp="2025-12-06 03:28:21 +0000 UTC" firstStartedPulling="2025-12-06 03:28:22.737962401 +0000 UTC m=+1355.860569973" lastFinishedPulling="2025-12-06 03:28:33.56785709 +0000 UTC m=+1366.690464662" observedRunningTime="2025-12-06 03:28:35.194329021 +0000 UTC m=+1368.316936593" watchObservedRunningTime="2025-12-06 03:28:35.215959272 +0000 UTC m=+1368.338566844" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.422824 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:28:35 crc kubenswrapper[4801]: E1206 03:28:35.423924 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429346 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" Dec 06 03:28:35 crc kubenswrapper[4801]: E1206 03:28:35.429450 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" containerName="cinder-db-sync" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429463 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" containerName="cinder-db-sync" Dec 06 03:28:35 crc kubenswrapper[4801]: E1206 03:28:35.429487 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429496 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" 
containerName="barbican-api-log" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429843 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api-log" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429887 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9cef0e-c1be-4f88-9c34-ac3ec60ffb78" containerName="barbican-api" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.429905 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" containerName="cinder-db-sync" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.431066 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.433708 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.434090 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.434270 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-f6gpr" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.434437 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.452206 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " 
pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511683 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511702 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511793 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511826 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zgn\" (UniqueName: \"kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.511907 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc 
kubenswrapper[4801]: I1206 03:28:35.520386 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.521917 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.533297 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.615030 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.615158 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.615679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.615781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc 
kubenswrapper[4801]: I1206 03:28:35.616019 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.616320 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.616998 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.617059 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.626453 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.626925 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.628118 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.630901 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nl2v\" (UniqueName: \"kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.630962 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.630985 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7zgn\" (UniqueName: \"kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.631071 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.640338 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.666239 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7zgn\" (UniqueName: \"kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn\") pod \"cinder-scheduler-0\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.723438 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.725219 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.730292 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.732725 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.732898 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.732957 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.732979 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.733023 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nl2v\" (UniqueName: 
\"kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.734299 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.734348 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.735069 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.735785 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.751357 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.761377 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4nl2v\" (UniqueName: \"kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v\") pod \"dnsmasq-dns-5b76cdf485-pw8qg\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.775244 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835046 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835128 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835308 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835384 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835500 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835545 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6gw\" (UniqueName: \"kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.835565 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.846506 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937644 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6gw\" (UniqueName: \"kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937740 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937829 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937880 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937926 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.937952 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.938369 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.938711 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.942706 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.943984 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.945811 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.958559 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:35 crc kubenswrapper[4801]: I1206 03:28:35.961398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6gw\" (UniqueName: \"kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw\") pod \"cinder-api-0\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " pod="openstack/cinder-api-0" Dec 06 03:28:36 crc kubenswrapper[4801]: I1206 03:28:36.048527 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.180420 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-8db5z" event={"ID":"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb","Type":"ContainerDied","Data":"35cda055fe4013403cd51980f0aeef3b2ca2bf644acb48778157611ca7ec21c7"} Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.180990 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35cda055fe4013403cd51980f0aeef3b2ca2bf644acb48778157611ca7ec21c7" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.263653 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8db5z" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.358357 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzpc\" (UniqueName: \"kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc\") pod \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.358428 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config\") pod \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.358619 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle\") pod \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\" (UID: \"57fef54f-ef5f-4e2b-b0d1-d4ce567280fb\") " Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.363051 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc" (OuterVolumeSpecName: "kube-api-access-rpzpc") pod "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" (UID: "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb"). InnerVolumeSpecName "kube-api-access-rpzpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.392997 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" (UID: "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.397534 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config" (OuterVolumeSpecName: "config") pod "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" (UID: "57fef54f-ef5f-4e2b-b0d1-d4ce567280fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.460986 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.461235 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpzpc\" (UniqueName: \"kubernetes.io/projected/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-kube-api-access-rpzpc\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.461249 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.795180 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:28:37 crc kubenswrapper[4801]: I1206 03:28:37.940174 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.063864 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:28:38 crc kubenswrapper[4801]: W1206 03:28:38.066688 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0e221d0_6ea7_4bf8_b89e_5c4b65d9e177.slice/crio-08b7a437f5a64123bae54ceba26a433b632b0c9fb9944e805b595fafd9fab681 WatchSource:0}: Error finding container 08b7a437f5a64123bae54ceba26a433b632b0c9fb9944e805b595fafd9fab681: Status 404 returned error can't find the container with id 08b7a437f5a64123bae54ceba26a433b632b0c9fb9944e805b595fafd9fab681 Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.085485 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:38 crc kubenswrapper[4801]: W1206 03:28:38.095444 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4767c0d4_41d1_471f_aa45_52f092ca5191.slice/crio-0efdf2d3535f6dd69b97d63528a79936bbdc950cd3e2361480cbd90c023a65ff WatchSource:0}: Error finding container 0efdf2d3535f6dd69b97d63528a79936bbdc950cd3e2361480cbd90c023a65ff: Status 404 returned error can't find the container with id 0efdf2d3535f6dd69b97d63528a79936bbdc950cd3e2361480cbd90c023a65ff Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.193351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerStarted","Data":"3dddd30f84ffd001702416ec963b3ea03efbfd8f335c10a202a75feac4db9788"} Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.194554 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" event={"ID":"4767c0d4-41d1-471f-aa45-52f092ca5191","Type":"ContainerStarted","Data":"0efdf2d3535f6dd69b97d63528a79936bbdc950cd3e2361480cbd90c023a65ff"} Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.195581 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-8db5z" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.196401 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerStarted","Data":"08b7a437f5a64123bae54ceba26a433b632b0c9fb9944e805b595fafd9fab681"} Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.522149 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.536217 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.563788 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:28:38 crc kubenswrapper[4801]: E1206 03:28:38.564335 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" containerName="neutron-db-sync" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.564353 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" containerName="neutron-db-sync" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.564525 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" containerName="neutron-db-sync" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.569515 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.580422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.599610 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.605609 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.616041 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-slq6m" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.616536 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.616594 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.622231 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.629035 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.692370 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.692433 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.692499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.692516 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mzxl\" (UniqueName: \"kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.692589 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.793526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.793858 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mzxl\" (UniqueName: 
\"kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.793929 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.793965 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.793987 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794012 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794034 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794081 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sscqs\" (UniqueName: \"kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794110 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.794913 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.795580 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.796599 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.797771 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.827963 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mzxl\" (UniqueName: \"kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl\") pod \"dnsmasq-dns-6d97fcdd8f-4q9fn\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.896157 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.896215 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle\") pod 
\"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.896253 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.896302 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sscqs\" (UniqueName: \"kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.896426 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.907845 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.916190 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " 
pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.916582 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.917500 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.917905 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:38 crc kubenswrapper[4801]: I1206 03:28:38.920247 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sscqs\" (UniqueName: \"kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs\") pod \"neutron-74c6fcb784-b9mbt\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:39 crc kubenswrapper[4801]: I1206 03:28:39.048049 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:28:39 crc kubenswrapper[4801]: I1206 03:28:39.418796 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:28:39 crc kubenswrapper[4801]: W1206 03:28:39.421048 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1279875a_a29e_48df_9631_e248326cecfa.slice/crio-28c0c4f72e140aa7dc5b502a0265ba05bd0b25642c63928a73b1b1cdfa78ec17 WatchSource:0}: Error finding container 28c0c4f72e140aa7dc5b502a0265ba05bd0b25642c63928a73b1b1cdfa78ec17: Status 404 returned error can't find the container with id 28c0c4f72e140aa7dc5b502a0265ba05bd0b25642c63928a73b1b1cdfa78ec17 Dec 06 03:28:40 crc kubenswrapper[4801]: I1206 03:28:40.229081 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerStarted","Data":"28c0c4f72e140aa7dc5b502a0265ba05bd0b25642c63928a73b1b1cdfa78ec17"} Dec 06 03:28:40 crc kubenswrapper[4801]: I1206 03:28:40.477562 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:28:40 crc kubenswrapper[4801]: W1206 03:28:40.478509 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd93d32ae_f984_4eac_9fdf_80479f40f4bb.slice/crio-4c18a16dd6af8ca679a812e5f2156b3f5aceffdb099d8adcbaf70daa0ca3c5fc WatchSource:0}: Error finding container 4c18a16dd6af8ca679a812e5f2156b3f5aceffdb099d8adcbaf70daa0ca3c5fc: Status 404 returned error can't find the container with id 4c18a16dd6af8ca679a812e5f2156b3f5aceffdb099d8adcbaf70daa0ca3c5fc Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.169985 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.170079 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.240505 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerStarted","Data":"4c18a16dd6af8ca679a812e5f2156b3f5aceffdb099d8adcbaf70daa0ca3c5fc"} Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.859794 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-554d4f888f-vn47n"] Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.861498 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.863867 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.864526 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.884357 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-554d4f888f-vn47n"] Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.884979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-ovndb-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885057 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-combined-ca-bundle\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885147 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-config\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885219 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-public-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885431 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn6vf\" (UniqueName: \"kubernetes.io/projected/87b90546-3593-40c2-9be7-84187756b4cf-kube-api-access-jn6vf\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885583 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-internal-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.885734 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-httpd-config\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-config\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988579 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-public-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn6vf\" (UniqueName: \"kubernetes.io/projected/87b90546-3593-40c2-9be7-84187756b4cf-kube-api-access-jn6vf\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988773 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-internal-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988876 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-httpd-config\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988950 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-ovndb-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.988984 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-combined-ca-bundle\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.996965 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-combined-ca-bundle\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.997139 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-public-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:41 crc kubenswrapper[4801]: I1206 03:28:41.997396 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-config\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.001052 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-ovndb-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.001400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-httpd-config\") pod \"neutron-554d4f888f-vn47n\" (UID: 
\"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.004720 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87b90546-3593-40c2-9be7-84187756b4cf-internal-tls-certs\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.011772 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn6vf\" (UniqueName: \"kubernetes.io/projected/87b90546-3593-40c2-9be7-84187756b4cf-kube-api-access-jn6vf\") pod \"neutron-554d4f888f-vn47n\" (UID: \"87b90546-3593-40c2-9be7-84187756b4cf\") " pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.192185 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:28:42 crc kubenswrapper[4801]: I1206 03:28:42.856241 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-554d4f888f-vn47n"] Dec 06 03:28:43 crc kubenswrapper[4801]: I1206 03:28:43.296844 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554d4f888f-vn47n" event={"ID":"87b90546-3593-40c2-9be7-84187756b4cf","Type":"ContainerStarted","Data":"2ad41cd41b1b5a91ed4510a52d6514fbcad279a9d97e3b415159bccc4739ef65"} Dec 06 03:28:43 crc kubenswrapper[4801]: I1206 03:28:43.301977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerStarted","Data":"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c"} Dec 06 03:28:46 crc kubenswrapper[4801]: I1206 03:28:46.335610 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" 
event={"ID":"4767c0d4-41d1-471f-aa45-52f092ca5191","Type":"ContainerStarted","Data":"5e1ab572489227d963998dec3888641eec710faff627229a681074e478609c23"} Dec 06 03:28:48 crc kubenswrapper[4801]: I1206 03:28:48.355199 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerStarted","Data":"ddbe97a3611e2dfcb8a3f3e8c7ff9c28f12c6687c5df85ff39476fa0419b2119"} Dec 06 03:28:49 crc kubenswrapper[4801]: I1206 03:28:49.370582 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerStarted","Data":"e5af1605c3016f7ad5c03ce67dc882858c2ec53a4da1a0729ad09e2517eeb1bf"} Dec 06 03:28:49 crc kubenswrapper[4801]: I1206 03:28:49.377287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerStarted","Data":"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6"} Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.388258 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554d4f888f-vn47n" event={"ID":"87b90546-3593-40c2-9be7-84187756b4cf","Type":"ContainerStarted","Data":"849b3b4c98e91be6121efdf1d4e1dfa7800d9a461b2c6400eac1a37315f76c0f"} Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.391155 4801 generic.go:334] "Generic (PLEG): container finished" podID="4767c0d4-41d1-471f-aa45-52f092ca5191" containerID="5e1ab572489227d963998dec3888641eec710faff627229a681074e478609c23" exitCode=0 Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.391350 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-central-agent" containerID="cri-o://8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8" gracePeriod=30 
Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.391733 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" event={"ID":"4767c0d4-41d1-471f-aa45-52f092ca5191","Type":"ContainerDied","Data":"5e1ab572489227d963998dec3888641eec710faff627229a681074e478609c23"} Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.391807 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.392007 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-notification-agent" containerID="cri-o://beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3" gracePeriod=30 Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.392061 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="sg-core" containerID="cri-o://f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d" gracePeriod=30 Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.393597 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="proxy-httpd" containerID="cri-o://a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c" gracePeriod=30 Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.453553 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=13.951128687 podStartE2EDuration="28.453524414s" podCreationTimestamp="2025-12-06 03:28:22 +0000 UTC" firstStartedPulling="2025-12-06 03:28:22.938860655 +0000 UTC m=+1356.061468227" lastFinishedPulling="2025-12-06 03:28:37.441256382 +0000 UTC m=+1370.563863954" observedRunningTime="2025-12-06 
03:28:50.442456232 +0000 UTC m=+1383.565063824" watchObservedRunningTime="2025-12-06 03:28:50.453524414 +0000 UTC m=+1383.576131986" Dec 06 03:28:50 crc kubenswrapper[4801]: I1206 03:28:50.968300 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.097652 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config\") pod \"4767c0d4-41d1-471f-aa45-52f092ca5191\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.098352 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nl2v\" (UniqueName: \"kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v\") pod \"4767c0d4-41d1-471f-aa45-52f092ca5191\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.098485 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb\") pod \"4767c0d4-41d1-471f-aa45-52f092ca5191\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.098566 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb\") pod \"4767c0d4-41d1-471f-aa45-52f092ca5191\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.098639 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc\") pod 
\"4767c0d4-41d1-471f-aa45-52f092ca5191\" (UID: \"4767c0d4-41d1-471f-aa45-52f092ca5191\") " Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.105181 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v" (OuterVolumeSpecName: "kube-api-access-4nl2v") pod "4767c0d4-41d1-471f-aa45-52f092ca5191" (UID: "4767c0d4-41d1-471f-aa45-52f092ca5191"). InnerVolumeSpecName "kube-api-access-4nl2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.125980 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4767c0d4-41d1-471f-aa45-52f092ca5191" (UID: "4767c0d4-41d1-471f-aa45-52f092ca5191"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.128392 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4767c0d4-41d1-471f-aa45-52f092ca5191" (UID: "4767c0d4-41d1-471f-aa45-52f092ca5191"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.128462 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config" (OuterVolumeSpecName: "config") pod "4767c0d4-41d1-471f-aa45-52f092ca5191" (UID: "4767c0d4-41d1-471f-aa45-52f092ca5191"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.134826 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4767c0d4-41d1-471f-aa45-52f092ca5191" (UID: "4767c0d4-41d1-471f-aa45-52f092ca5191"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.200490 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.200543 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nl2v\" (UniqueName: \"kubernetes.io/projected/4767c0d4-41d1-471f-aa45-52f092ca5191-kube-api-access-4nl2v\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.200558 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.200568 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.200581 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4767c0d4-41d1-471f-aa45-52f092ca5191-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.403723 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" 
event={"ID":"4767c0d4-41d1-471f-aa45-52f092ca5191","Type":"ContainerDied","Data":"0efdf2d3535f6dd69b97d63528a79936bbdc950cd3e2361480cbd90c023a65ff"} Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.403816 4801 scope.go:117] "RemoveContainer" containerID="5e1ab572489227d963998dec3888641eec710faff627229a681074e478609c23" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.403856 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b76cdf485-pw8qg" Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.477882 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:51 crc kubenswrapper[4801]: I1206 03:28:51.498530 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b76cdf485-pw8qg"] Dec 06 03:28:52 crc kubenswrapper[4801]: I1206 03:28:52.418975 4801 generic.go:334] "Generic (PLEG): container finished" podID="41094792-7d92-4816-8ce1-cda462529daf" containerID="a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c" exitCode=0 Dec 06 03:28:52 crc kubenswrapper[4801]: I1206 03:28:52.419633 4801 generic.go:334] "Generic (PLEG): container finished" podID="41094792-7d92-4816-8ce1-cda462529daf" containerID="f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d" exitCode=2 Dec 06 03:28:52 crc kubenswrapper[4801]: I1206 03:28:52.419197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerDied","Data":"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c"} Dec 06 03:28:52 crc kubenswrapper[4801]: I1206 03:28:52.419715 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerDied","Data":"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d"} Dec 06 03:28:53 crc kubenswrapper[4801]: I1206 
03:28:53.223331 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4767c0d4-41d1-471f-aa45-52f092ca5191" path="/var/lib/kubelet/pods/4767c0d4-41d1-471f-aa45-52f092ca5191/volumes" Dec 06 03:28:54 crc kubenswrapper[4801]: I1206 03:28:54.440053 4801 generic.go:334] "Generic (PLEG): container finished" podID="1279875a-a29e-48df-9631-e248326cecfa" containerID="de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6" exitCode=0 Dec 06 03:28:54 crc kubenswrapper[4801]: I1206 03:28:54.440189 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerDied","Data":"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6"} Dec 06 03:28:59 crc kubenswrapper[4801]: I1206 03:28:59.193299 4801 generic.go:334] "Generic (PLEG): container finished" podID="41094792-7d92-4816-8ce1-cda462529daf" containerID="8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8" exitCode=-1 Dec 06 03:28:59 crc kubenswrapper[4801]: I1206 03:28:59.193389 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerDied","Data":"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8"} Dec 06 03:29:00 crc kubenswrapper[4801]: I1206 03:29:00.206460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerStarted","Data":"845f4b4b13d1939715500d12526f84992679f9f38ddafd7bca57b6317f5375ad"} Dec 06 03:29:00 crc kubenswrapper[4801]: I1206 03:29:00.208589 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554d4f888f-vn47n" event={"ID":"87b90546-3593-40c2-9be7-84187756b4cf","Type":"ContainerStarted","Data":"021ef1c077ca7b6dd4016679b5a28a09029f49ca660a12081c06e266a73c38a4"} Dec 06 03:29:00 crc kubenswrapper[4801]: I1206 03:29:00.210226 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerStarted","Data":"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843"} Dec 06 03:29:00 crc kubenswrapper[4801]: I1206 03:29:00.213198 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerStarted","Data":"ef83a78b23fc03233b49fa83bb7bf37edf125046cf6daeb132eaff1dae7fe869"} Dec 06 03:29:02 crc kubenswrapper[4801]: I1206 03:29:02.232857 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:29:02 crc kubenswrapper[4801]: I1206 03:29:02.257947 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-554d4f888f-vn47n" podStartSLOduration=21.257918897 podStartE2EDuration="21.257918897s" podCreationTimestamp="2025-12-06 03:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:02.256975281 +0000 UTC m=+1395.379582913" watchObservedRunningTime="2025-12-06 03:29:02.257918897 +0000 UTC m=+1395.380526509" Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.239635 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.240183 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api-log" containerID="cri-o://ddbe97a3611e2dfcb8a3f3e8c7ff9c28f12c6687c5df85ff39476fa0419b2119" gracePeriod=30 Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.240216 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" 
podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api" containerID="cri-o://845f4b4b13d1939715500d12526f84992679f9f38ddafd7bca57b6317f5375ad" gracePeriod=30 Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.259958 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" podStartSLOduration=25.259941786 podStartE2EDuration="25.259941786s" podCreationTimestamp="2025-12-06 03:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:03.255453263 +0000 UTC m=+1396.378060835" watchObservedRunningTime="2025-12-06 03:29:03.259941786 +0000 UTC m=+1396.382549358" Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.277779 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74c6fcb784-b9mbt" podStartSLOduration=25.277736622 podStartE2EDuration="25.277736622s" podCreationTimestamp="2025-12-06 03:28:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:03.275850641 +0000 UTC m=+1396.398458233" watchObservedRunningTime="2025-12-06 03:29:03.277736622 +0000 UTC m=+1396.400344194" Dec 06 03:29:03 crc kubenswrapper[4801]: I1206 03:29:03.303782 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=28.303747014 podStartE2EDuration="28.303747014s" podCreationTimestamp="2025-12-06 03:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:03.296415684 +0000 UTC m=+1396.419023256" watchObservedRunningTime="2025-12-06 03:29:03.303747014 +0000 UTC m=+1396.426354586" Dec 06 03:29:04 crc kubenswrapper[4801]: I1206 03:29:04.253412 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerID="ddbe97a3611e2dfcb8a3f3e8c7ff9c28f12c6687c5df85ff39476fa0419b2119" exitCode=143 Dec 06 03:29:04 crc kubenswrapper[4801]: I1206 03:29:04.253612 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerDied","Data":"ddbe97a3611e2dfcb8a3f3e8c7ff9c28f12c6687c5df85ff39476fa0419b2119"} Dec 06 03:29:05 crc kubenswrapper[4801]: I1206 03:29:05.263907 4801 generic.go:334] "Generic (PLEG): container finished" podID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerID="845f4b4b13d1939715500d12526f84992679f9f38ddafd7bca57b6317f5375ad" exitCode=0 Dec 06 03:29:05 crc kubenswrapper[4801]: I1206 03:29:05.263952 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerDied","Data":"845f4b4b13d1939715500d12526f84992679f9f38ddafd7bca57b6317f5375ad"} Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.049010 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.611366 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.689654 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-cr4fx"] Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.690016 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767c0d4-41d1-471f-aa45-52f092ca5191" containerName="init" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690034 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767c0d4-41d1-471f-aa45-52f092ca5191" containerName="init" Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.690059 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690065 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api" Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.690089 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api-log" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690097 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api-log" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690244 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4767c0d4-41d1-471f-aa45-52f092ca5191" containerName="init" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690257 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.690268 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api-log" Dec 06 03:29:06 crc 
kubenswrapper[4801]: I1206 03:29:06.690829 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.711444 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cr4fx"] Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716448 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716552 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rv6gw\" (UniqueName: \"kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716586 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716621 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle\") pod 
\"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716650 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.716677 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id\") pod \"0179e86f-f9ff-4945-8870-7c54764ee77d\" (UID: \"0179e86f-f9ff-4945-8870-7c54764ee77d\") " Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.717119 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs" (OuterVolumeSpecName: "logs") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.717149 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.728165 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.735650 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts" (OuterVolumeSpecName: "scripts") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.736935 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw" (OuterVolumeSpecName: "kube-api-access-rv6gw") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "kube-api-access-rv6gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.774553 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.800894 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-kp8cl"] Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.802148 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.814518 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified" Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.814743 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-scheduler,Image:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fdh5dfh695h7h684hd8hfch97h674h5fh54h58fh585h68h95hc7h5dh6h547h698h676h658h6fh9h54h644h594hdbh76h5fdh686h5d5q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cinder-scheduler-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-b
undle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v7zgn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*42407,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:12,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-scheduler-0_openstack(f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.816248 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2df8-account-create-update-w9l8q"] Dec 06 03:29:06 crc kubenswrapper[4801]: E1206 03:29:06.817167 4801 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"cinder-scheduler\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"probe\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\"]" pod="openstack/cinder-scheduler-0" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.817569 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818213 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818249 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcp4m\" (UniqueName: \"kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818330 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0179e86f-f9ff-4945-8870-7c54764ee77d-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818343 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rv6gw\" (UniqueName: \"kubernetes.io/projected/0179e86f-f9ff-4945-8870-7c54764ee77d-kube-api-access-rv6gw\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: 
I1206 03:29:06.818354 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818362 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818372 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.818380 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0179e86f-f9ff-4945-8870-7c54764ee77d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.824898 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.828841 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kp8cl"] Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.835033 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2df8-account-create-update-w9l8q"] Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.857129 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data" (OuterVolumeSpecName: "config-data") pod "0179e86f-f9ff-4945-8870-7c54764ee77d" (UID: "0179e86f-f9ff-4945-8870-7c54764ee77d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919666 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84x6\" (UniqueName: \"kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919763 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919795 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts\") pod \"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcp4m\" (UniqueName: \"kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xg4\" (UniqueName: \"kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4\") pod 
\"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919880 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.919949 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0179e86f-f9ff-4945-8870-7c54764ee77d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.920517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.937734 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcp4m\" (UniqueName: \"kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m\") pod \"nova-api-db-create-cr4fx\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.982274 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bc6j9"] Dec 06 03:29:06 crc kubenswrapper[4801]: I1206 03:29:06.983359 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:06.999558 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-be62-account-create-update-pxz7k"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.000672 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.006497 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.006669 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.011475 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be62-account-create-update-pxz7k"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.015989 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039143 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n84x6\" (UniqueName: \"kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039237 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039264 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkhq\" (UniqueName: \"kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039321 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts\") pod \"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039354 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts\") pod \"nova-cell1-db-create-bc6j9\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039425 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xg4\" (UniqueName: \"kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4\") pod \"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039502 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.039562 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdcv\" (UniqueName: \"kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv\") pod \"nova-cell1-db-create-bc6j9\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.040584 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts\") pod \"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.042624 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.085958 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xg4\" (UniqueName: \"kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4\") pod \"nova-cell0-db-create-kp8cl\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.103583 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84x6\" (UniqueName: \"kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6\") pod \"nova-api-2df8-account-create-update-w9l8q\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") " pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.116811 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bc6j9"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142537 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142618 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142675 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142744 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142789 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmb6k\" (UniqueName: \"kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142859 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.142966 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml\") pod \"41094792-7d92-4816-8ce1-cda462529daf\" (UID: \"41094792-7d92-4816-8ce1-cda462529daf\") " Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.143185 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts\") pod \"nova-cell1-db-create-bc6j9\" (UID: 
\"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.143312 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdcv\" (UniqueName: \"kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv\") pod \"nova-cell1-db-create-bc6j9\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.143402 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.143430 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkhq\" (UniqueName: \"kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.143831 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.144667 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts\") pod \"nova-cell1-db-create-bc6j9\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.150417 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.150892 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.153203 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k" (OuterVolumeSpecName: "kube-api-access-wmb6k") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "kube-api-access-wmb6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.153542 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts" (OuterVolumeSpecName: "scripts") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.172362 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkhq\" (UniqueName: \"kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq\") pod \"nova-cell0-be62-account-create-update-pxz7k\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.172480 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdcv\" (UniqueName: \"kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv\") pod \"nova-cell1-db-create-bc6j9\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.192326 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.193791 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-59fe-account-create-update-zp867"] Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.194200 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="sg-core" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194219 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="sg-core" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.194235 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-notification-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194246 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-notification-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.194263 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-central-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194270 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-central-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.194282 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="proxy-httpd" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194289 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="proxy-httpd" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194470 4801 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="proxy-httpd" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194490 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="sg-core" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194499 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-notification-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.194521 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="41094792-7d92-4816-8ce1-cda462529daf" containerName="ceilometer-central-agent" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.195110 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.198695 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.201169 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2df8-account-create-update-w9l8q" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.202443 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.242153 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59fe-account-create-update-zp867"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.245841 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.245933 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdv6x\" (UniqueName: \"kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.246022 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.246039 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.246047 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.246058 4801 reconciler_common.go:293] 
"Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/41094792-7d92-4816-8ce1-cda462529daf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.246066 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmb6k\" (UniqueName: \"kubernetes.io/projected/41094792-7d92-4816-8ce1-cda462529daf-kube-api-access-wmb6k\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.294018 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data" (OuterVolumeSpecName: "config-data") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.295161 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41094792-7d92-4816-8ce1-cda462529daf" (UID: "41094792-7d92-4816-8ce1-cda462529daf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.309667 4801 generic.go:334] "Generic (PLEG): container finished" podID="41094792-7d92-4816-8ce1-cda462529daf" containerID="beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3" exitCode=0 Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.309744 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerDied","Data":"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3"} Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.309790 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"41094792-7d92-4816-8ce1-cda462529daf","Type":"ContainerDied","Data":"4033288a4768fc6677773b7ac03540f1e1b07cc6d0efadde80be41f8e4184d92"} Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.309807 4801 scope.go:117] "RemoveContainer" containerID="a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.309944 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.310695 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.320397 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.322190 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0179e86f-f9ff-4945-8870-7c54764ee77d","Type":"ContainerDied","Data":"3dddd30f84ffd001702416ec963b3ea03efbfd8f335c10a202a75feac4db9788"} Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.338842 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.349628 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdv6x\" (UniqueName: \"kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.349904 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.349993 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.350032 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41094792-7d92-4816-8ce1-cda462529daf-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.350921 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.385807 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"cinder-scheduler\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\", failed to \"StartContainer\" for \"probe\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified\\\"\"]" pod="openstack/cinder-scheduler-0" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.386713 4801 scope.go:117] "RemoveContainer" containerID="f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.414439 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdv6x\" (UniqueName: \"kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x\") pod \"nova-cell1-59fe-account-create-update-zp867\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.449100 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.471097 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.483072 4801 scope.go:117] "RemoveContainer" 
containerID="beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.488194 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-cr4fx"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.520786 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.551872 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.552510 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.577895 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.580589 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.584180 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.584404 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.608272 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.610307 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.615194 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.615341 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.627359 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.633012 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.647117 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655721 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f92f28-d85e-47ea-a585-a67fb86a540f-logs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655789 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655824 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc 
kubenswrapper[4801]: I1206 03:29:07.655842 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpxw\" (UniqueName: \"kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655881 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655964 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.655993 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09f92f28-d85e-47ea-a585-a67fb86a540f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656059 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656077 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-scripts\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656100 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656177 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vllhr\" (UniqueName: \"kubernetes.io/projected/09f92f28-d85e-47ea-a585-a67fb86a540f-kube-api-access-vllhr\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656256 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-internal-tls-certs\") pod 
\"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656296 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.656326 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data-custom\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.756183 4801 scope.go:117] "RemoveContainer" containerID="8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758306 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758346 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758369 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f92f28-d85e-47ea-a585-a67fb86a540f-logs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758485 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpxw\" (UniqueName: \"kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758517 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758535 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758553 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758570 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758610 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09f92f28-d85e-47ea-a585-a67fb86a540f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-scripts\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc 
kubenswrapper[4801]: I1206 03:29:07.758643 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.758673 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vllhr\" (UniqueName: \"kubernetes.io/projected/09f92f28-d85e-47ea-a585-a67fb86a540f-kube-api-access-vllhr\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.763260 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data-custom\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.763610 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.766700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f92f28-d85e-47ea-a585-a67fb86a540f-logs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.768002 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/09f92f28-d85e-47ea-a585-a67fb86a540f-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.769055 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.769959 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.779943 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.780243 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.784212 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-scripts\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.786505 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.786743 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-config-data\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.787048 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.787415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09f92f28-d85e-47ea-a585-a67fb86a540f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.788409 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vllhr\" (UniqueName: \"kubernetes.io/projected/09f92f28-d85e-47ea-a585-a67fb86a540f-kube-api-access-vllhr\") pod \"cinder-api-0\" (UID: \"09f92f28-d85e-47ea-a585-a67fb86a540f\") " pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.807004 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.818893 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpxw\" (UniqueName: \"kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw\") pod \"ceilometer-0\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") " pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.886120 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-kp8cl"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.894822 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2df8-account-create-update-w9l8q"] Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.912764 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.931712 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.935775 4801 scope.go:117] "RemoveContainer" containerID="a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.944368 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c\": container with ID starting with a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c not found: ID does not exist" containerID="a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.944410 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c"} err="failed to get container status \"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c\": rpc error: code = NotFound desc = could not find 
container \"a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c\": container with ID starting with a1c6bf59d710276b37c2a37715c680f43191486bf451a11fe529481c39494a0c not found: ID does not exist" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.944440 4801 scope.go:117] "RemoveContainer" containerID="f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.950897 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d\": container with ID starting with f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d not found: ID does not exist" containerID="f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.950933 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d"} err="failed to get container status \"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d\": rpc error: code = NotFound desc = could not find container \"f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d\": container with ID starting with f95e8b6bad2eaf0b56741174cd08c33e2239878018723bcd5bbd202a704d1e8d not found: ID does not exist" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.950955 4801 scope.go:117] "RemoveContainer" containerID="beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.953240 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3\": container with ID starting with beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3 not found: ID does 
not exist" containerID="beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.953266 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3"} err="failed to get container status \"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3\": rpc error: code = NotFound desc = could not find container \"beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3\": container with ID starting with beb5b6c2c7b9ad239e6c876197a1483fc2396da3435a43cb4aed31457a77dce3 not found: ID does not exist" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.953282 4801 scope.go:117] "RemoveContainer" containerID="8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8" Dec 06 03:29:07 crc kubenswrapper[4801]: E1206 03:29:07.959705 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8\": container with ID starting with 8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8 not found: ID does not exist" containerID="8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.959736 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8"} err="failed to get container status \"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8\": rpc error: code = NotFound desc = could not find container \"8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8\": container with ID starting with 8af7082e6c3beaeeee15442b7ca94408969bd1815a3997352ba7f32d8a6140b8 not found: ID does not exist" Dec 06 03:29:07 crc kubenswrapper[4801]: I1206 03:29:07.959768 4801 
scope.go:117] "RemoveContainer" containerID="845f4b4b13d1939715500d12526f84992679f9f38ddafd7bca57b6317f5375ad" Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.019689 4801 scope.go:117] "RemoveContainer" containerID="ddbe97a3611e2dfcb8a3f3e8c7ff9c28f12c6687c5df85ff39476fa0419b2119" Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.121834 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be62-account-create-update-pxz7k"] Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.204473 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bc6j9"] Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.371945 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-59fe-account-create-update-zp867"] Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.381581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2df8-account-create-update-w9l8q" event={"ID":"80e58a01-f644-4664-8d9b-f7c22938e4aa","Type":"ContainerStarted","Data":"3783c4e8887ba61b0d8cd832dcc344f03fc6684dd996e06360e9fdbe9cc5aeb6"} Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.384274 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc6j9" event={"ID":"ec233495-e3c7-4268-8eb9-532e73143533","Type":"ContainerStarted","Data":"b42cf061736d4aad5c7fb4ec3f6757cf72e3f297bab01bc776482b78f7a27c2a"} Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.385453 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" event={"ID":"3655d081-5002-4403-869e-e027935e4f0b","Type":"ContainerStarted","Data":"a65ba33a0ad20d5a33bdbfbd33a1774b5a888aff8d1e039ad4fad9985625cd9e"} Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.386208 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kp8cl" 
event={"ID":"d727fb0f-a514-492e-9e91-df76ceccf42d","Type":"ContainerStarted","Data":"1d2fd123d3fd1994ef0cbaca5c638365d7921816e930b078b26cee8e47e61a3b"} Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.387009 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cr4fx" event={"ID":"fc374618-2dac-4256-9048-76b3774d35b8","Type":"ContainerStarted","Data":"02f22402b8f1d61a8a60eef978094928bb8f5f834ca4eca302144bda05b5e6f5"} Dec 06 03:29:08 crc kubenswrapper[4801]: W1206 03:29:08.413447 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb03c3261_5b37_43cb_8148_a9e709c13a1e.slice/crio-e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5 WatchSource:0}: Error finding container e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5: Status 404 returned error can't find the container with id e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5 Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.633058 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:08 crc kubenswrapper[4801]: W1206 03:29:08.641785 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8af9567c_7fd3_4a06_8a78_2acc56974fe7.slice/crio-95054bf3388626baceabc0bc9b011645a07ade75a30fb5bd78ad29c08ee2cf80 WatchSource:0}: Error finding container 95054bf3388626baceabc0bc9b011645a07ade75a30fb5bd78ad29c08ee2cf80: Status 404 returned error can't find the container with id 95054bf3388626baceabc0bc9b011645a07ade75a30fb5bd78ad29c08ee2cf80 Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.824260 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 03:29:08 crc kubenswrapper[4801]: I1206 03:29:08.918939 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.001735 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.001987 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="dnsmasq-dns" containerID="cri-o://59da8b3ed7a8130f87344b2d9fc497b9866bd883d3a4263cdfe64c35c0e5226a" gracePeriod=10 Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.049734 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.059034 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.059646 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.059790 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.061926 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:09 crc 
kubenswrapper[4801]: I1206 03:29:09.229017 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" path="/var/lib/kubelet/pods/0179e86f-f9ff-4945-8870-7c54764ee77d/volumes" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.230124 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41094792-7d92-4816-8ce1-cda462529daf" path="/var/lib/kubelet/pods/41094792-7d92-4816-8ce1-cda462529daf/volumes" Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.404557 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09f92f28-d85e-47ea-a585-a67fb86a540f","Type":"ContainerStarted","Data":"ccb5a307806fed4d64b8f84e9202a0df7731d33d56b466a8df86f0b5af28beeb"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.411228 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2df8-account-create-update-w9l8q" event={"ID":"80e58a01-f644-4664-8d9b-f7c22938e4aa","Type":"ContainerStarted","Data":"8cf1887b8275b3cb968c161a9697a9fddb6b57fb7b4ea9f0e06b4576687a7c37"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.416221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc6j9" event={"ID":"ec233495-e3c7-4268-8eb9-532e73143533","Type":"ContainerStarted","Data":"3b3be67890a1583b30d351a35d619c24a8bc1cfd74cc1985eaa99468a4ee1904"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.420876 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59fe-account-create-update-zp867" event={"ID":"b03c3261-5b37-43cb-8148-a9e709c13a1e","Type":"ContainerStarted","Data":"bd01a7222b8a1d54818206a0aff6ac828bb3c5b9bd07f5471d12ccb1f6f06d07"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.420938 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59fe-account-create-update-zp867" 
event={"ID":"b03c3261-5b37-43cb-8148-a9e709c13a1e","Type":"ContainerStarted","Data":"e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.423282 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerStarted","Data":"95054bf3388626baceabc0bc9b011645a07ade75a30fb5bd78ad29c08ee2cf80"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.427476 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" event={"ID":"3655d081-5002-4403-869e-e027935e4f0b","Type":"ContainerStarted","Data":"3d05b0fee21513e52b9337a18e940c9bf240916f4dd9ea09c1e9127034c69531"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.429576 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kp8cl" event={"ID":"d727fb0f-a514-492e-9e91-df76ceccf42d","Type":"ContainerStarted","Data":"9bf96e20d2bc4a9fce320775cbbcf65124f692b13a62efc9d077fc043b695dd0"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.432847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cr4fx" event={"ID":"fc374618-2dac-4256-9048-76b3774d35b8","Type":"ContainerStarted","Data":"11a8202632d51f834f18df9d67cc3ddb61aa2203ef31bb28adad5818a2d887a8"} Dec 06 03:29:09 crc kubenswrapper[4801]: I1206 03:29:09.453151 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" podStartSLOduration=3.453130852 podStartE2EDuration="3.453130852s" podCreationTimestamp="2025-12-06 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:09.441591097 +0000 UTC m=+1402.564198679" watchObservedRunningTime="2025-12-06 03:29:09.453130852 +0000 UTC m=+1402.575738444" Dec 06 
03:29:10 crc kubenswrapper[4801]: I1206 03:29:10.443170 4801 generic.go:334] "Generic (PLEG): container finished" podID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerID="59da8b3ed7a8130f87344b2d9fc497b9866bd883d3a4263cdfe64c35c0e5226a" exitCode=0 Dec 06 03:29:10 crc kubenswrapper[4801]: I1206 03:29:10.443226 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" event={"ID":"065ef35f-50b6-4eb5-b46c-961b40e0e29f","Type":"ContainerDied","Data":"59da8b3ed7a8130f87344b2d9fc497b9866bd883d3a4263cdfe64c35c0e5226a"} Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.049275 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="0179e86f-f9ff-4945-8870-7c54764ee77d" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.148:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.170194 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.170279 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.170343 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.171355 4801 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:29:11 crc kubenswrapper[4801]: I1206 03:29:11.171433 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9" gracePeriod=600 Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.204301 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-554d4f888f-vn47n" podUID="87b90546-3593-40c2-9be7-84187756b4cf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.205049 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-554d4f888f-vn47n" podUID="87b90546-3593-40c2-9be7-84187756b4cf" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.205471 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-554d4f888f-vn47n" podUID="87b90546-3593-40c2-9be7-84187756b4cf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.480153 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09f92f28-d85e-47ea-a585-a67fb86a540f","Type":"ContainerStarted","Data":"a556d183acadc327a116630f1d5b8279c4fc7135336d906cb707ada31eaf9d51"} Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.502568 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-kp8cl" podStartSLOduration=6.502549265 podStartE2EDuration="6.502549265s" podCreationTimestamp="2025-12-06 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:12.496451997 +0000 UTC m=+1405.619059569" watchObservedRunningTime="2025-12-06 03:29:12.502549265 +0000 UTC m=+1405.625156837" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.518031 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-cr4fx" podStartSLOduration=6.518006517 podStartE2EDuration="6.518006517s" podCreationTimestamp="2025-12-06 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:12.51556684 +0000 UTC m=+1405.638174412" watchObservedRunningTime="2025-12-06 03:29:12.518006517 +0000 UTC m=+1405.640614089" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.539872 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.543417 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bc6j9" podStartSLOduration=6.543387501 podStartE2EDuration="6.543387501s" podCreationTimestamp="2025-12-06 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:12.535739402 +0000 UTC m=+1405.658347004" watchObservedRunningTime="2025-12-06 03:29:12.543387501 +0000 UTC m=+1405.665995073" Dec 06 03:29:12 crc kubenswrapper[4801]: 
I1206 03:29:12.557965 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2df8-account-create-update-w9l8q" podStartSLOduration=6.557937489 podStartE2EDuration="6.557937489s" podCreationTimestamp="2025-12-06 03:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:12.551936905 +0000 UTC m=+1405.674544497" watchObservedRunningTime="2025-12-06 03:29:12.557937489 +0000 UTC m=+1405.680545061" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.586265 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-59fe-account-create-update-zp867" podStartSLOduration=5.586235363 podStartE2EDuration="5.586235363s" podCreationTimestamp="2025-12-06 03:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:12.573289099 +0000 UTC m=+1405.695896671" watchObservedRunningTime="2025-12-06 03:29:12.586235363 +0000 UTC m=+1405.708842945" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.837081 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.968003 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config\") pod \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.968557 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc\") pod \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.968834 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb\") pod \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.968926 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh4dv\" (UniqueName: \"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv\") pod \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.969014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb\") pod \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\" (UID: \"065ef35f-50b6-4eb5-b46c-961b40e0e29f\") " Dec 06 03:29:12 crc kubenswrapper[4801]: I1206 03:29:12.977071 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv" (OuterVolumeSpecName: "kube-api-access-vh4dv") pod "065ef35f-50b6-4eb5-b46c-961b40e0e29f" (UID: "065ef35f-50b6-4eb5-b46c-961b40e0e29f"). InnerVolumeSpecName "kube-api-access-vh4dv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.067150 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "065ef35f-50b6-4eb5-b46c-961b40e0e29f" (UID: "065ef35f-50b6-4eb5-b46c-961b40e0e29f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.071267 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh4dv\" (UniqueName: \"kubernetes.io/projected/065ef35f-50b6-4eb5-b46c-961b40e0e29f-kube-api-access-vh4dv\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.071301 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.072413 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config" (OuterVolumeSpecName: "config") pod "065ef35f-50b6-4eb5-b46c-961b40e0e29f" (UID: "065ef35f-50b6-4eb5-b46c-961b40e0e29f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.079305 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "065ef35f-50b6-4eb5-b46c-961b40e0e29f" (UID: "065ef35f-50b6-4eb5-b46c-961b40e0e29f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.085719 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "065ef35f-50b6-4eb5-b46c-961b40e0e29f" (UID: "065ef35f-50b6-4eb5-b46c-961b40e0e29f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.173095 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.173145 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.173155 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/065ef35f-50b6-4eb5-b46c-961b40e0e29f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.492121 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9" exitCode=0 Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 
03:29:13.492194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9"} Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.492247 4801 scope.go:117] "RemoveContainer" containerID="ff81fd67675c4763c098dcc0a53f067a4ce5fbfac499868e5be530bd2f0ce8c0" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.502094 4801 generic.go:334] "Generic (PLEG): container finished" podID="ec233495-e3c7-4268-8eb9-532e73143533" containerID="3b3be67890a1583b30d351a35d619c24a8bc1cfd74cc1985eaa99468a4ee1904" exitCode=0 Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.502180 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc6j9" event={"ID":"ec233495-e3c7-4268-8eb9-532e73143533","Type":"ContainerDied","Data":"3b3be67890a1583b30d351a35d619c24a8bc1cfd74cc1985eaa99468a4ee1904"} Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.504642 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" event={"ID":"065ef35f-50b6-4eb5-b46c-961b40e0e29f","Type":"ContainerDied","Data":"d33d4ace74987b2b231068dae074d84f1540aca229842df9b69d077a47495229"} Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.504725 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-d4ndf" Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.554305 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:29:13 crc kubenswrapper[4801]: I1206 03:29:13.561579 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-d4ndf"] Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.117864 4801 scope.go:117] "RemoveContainer" containerID="59da8b3ed7a8130f87344b2d9fc497b9866bd883d3a4263cdfe64c35c0e5226a" Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.148821 4801 scope.go:117] "RemoveContainer" containerID="f0c9254c8a46214b67f0c0e1bddac22016c28efdd6bacdde200e489a6efb21d1" Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.515163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28"} Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.517746 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"09f92f28-d85e-47ea-a585-a67fb86a540f","Type":"ContainerStarted","Data":"ca0983bfc086ba166a7554ba82422d0c004324277d1de1f30c8a20b6d693826e"} Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.518072 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.520956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerStarted","Data":"137fc6182fdfdd0b00730cd6ccd69179255145a07af4a7dad6bf05c2c71b7594"} Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.551031 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cinder-api-0" podStartSLOduration=7.551012127 podStartE2EDuration="7.551012127s" podCreationTimestamp="2025-12-06 03:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:14.549422494 +0000 UTC m=+1407.672030086" watchObservedRunningTime="2025-12-06 03:29:14.551012127 +0000 UTC m=+1407.673619699" Dec 06 03:29:14 crc kubenswrapper[4801]: I1206 03:29:14.879222 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.029111 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts\") pod \"ec233495-e3c7-4268-8eb9-532e73143533\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.029275 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vdcv\" (UniqueName: \"kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv\") pod \"ec233495-e3c7-4268-8eb9-532e73143533\" (UID: \"ec233495-e3c7-4268-8eb9-532e73143533\") " Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.031094 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec233495-e3c7-4268-8eb9-532e73143533" (UID: "ec233495-e3c7-4268-8eb9-532e73143533"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.035977 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv" (OuterVolumeSpecName: "kube-api-access-5vdcv") pod "ec233495-e3c7-4268-8eb9-532e73143533" (UID: "ec233495-e3c7-4268-8eb9-532e73143533"). InnerVolumeSpecName "kube-api-access-5vdcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.131130 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vdcv\" (UniqueName: \"kubernetes.io/projected/ec233495-e3c7-4268-8eb9-532e73143533-kube-api-access-5vdcv\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.131160 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec233495-e3c7-4268-8eb9-532e73143533-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.223270 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" path="/var/lib/kubelet/pods/065ef35f-50b6-4eb5-b46c-961b40e0e29f/volumes" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.543674 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bc6j9" Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.544441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bc6j9" event={"ID":"ec233495-e3c7-4268-8eb9-532e73143533","Type":"ContainerDied","Data":"b42cf061736d4aad5c7fb4ec3f6757cf72e3f297bab01bc776482b78f7a27c2a"} Dec 06 03:29:15 crc kubenswrapper[4801]: I1206 03:29:15.544480 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42cf061736d4aad5c7fb4ec3f6757cf72e3f297bab01bc776482b78f7a27c2a" Dec 06 03:29:16 crc kubenswrapper[4801]: I1206 03:29:16.557234 4801 generic.go:334] "Generic (PLEG): container finished" podID="b03c3261-5b37-43cb-8148-a9e709c13a1e" containerID="bd01a7222b8a1d54818206a0aff6ac828bb3c5b9bd07f5471d12ccb1f6f06d07" exitCode=0 Dec 06 03:29:16 crc kubenswrapper[4801]: I1206 03:29:16.557280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59fe-account-create-update-zp867" event={"ID":"b03c3261-5b37-43cb-8148-a9e709c13a1e","Type":"ContainerDied","Data":"bd01a7222b8a1d54818206a0aff6ac828bb3c5b9bd07f5471d12ccb1f6f06d07"} Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.566507 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerStarted","Data":"7dfa3cba33ed581b1e3409051a3870e52726c2e705e26369b5ae71325ca2db93"} Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.568292 4801 generic.go:334] "Generic (PLEG): container finished" podID="d727fb0f-a514-492e-9e91-df76ceccf42d" containerID="9bf96e20d2bc4a9fce320775cbbcf65124f692b13a62efc9d077fc043b695dd0" exitCode=0 Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.568335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kp8cl" 
event={"ID":"d727fb0f-a514-492e-9e91-df76ceccf42d","Type":"ContainerDied","Data":"9bf96e20d2bc4a9fce320775cbbcf65124f692b13a62efc9d077fc043b695dd0"} Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.570068 4801 generic.go:334] "Generic (PLEG): container finished" podID="fc374618-2dac-4256-9048-76b3774d35b8" containerID="11a8202632d51f834f18df9d67cc3ddb61aa2203ef31bb28adad5818a2d887a8" exitCode=0 Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.570130 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cr4fx" event={"ID":"fc374618-2dac-4256-9048-76b3774d35b8","Type":"ContainerDied","Data":"11a8202632d51f834f18df9d67cc3ddb61aa2203ef31bb28adad5818a2d887a8"} Dec 06 03:29:17 crc kubenswrapper[4801]: I1206 03:29:17.957163 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.092735 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdv6x\" (UniqueName: \"kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x\") pod \"b03c3261-5b37-43cb-8148-a9e709c13a1e\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.093217 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts\") pod \"b03c3261-5b37-43cb-8148-a9e709c13a1e\" (UID: \"b03c3261-5b37-43cb-8148-a9e709c13a1e\") " Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.094376 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b03c3261-5b37-43cb-8148-a9e709c13a1e" (UID: "b03c3261-5b37-43cb-8148-a9e709c13a1e"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.101569 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x" (OuterVolumeSpecName: "kube-api-access-wdv6x") pod "b03c3261-5b37-43cb-8148-a9e709c13a1e" (UID: "b03c3261-5b37-43cb-8148-a9e709c13a1e"). InnerVolumeSpecName "kube-api-access-wdv6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.196723 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdv6x\" (UniqueName: \"kubernetes.io/projected/b03c3261-5b37-43cb-8148-a9e709c13a1e-kube-api-access-wdv6x\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.196808 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b03c3261-5b37-43cb-8148-a9e709c13a1e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.578795 4801 generic.go:334] "Generic (PLEG): container finished" podID="3655d081-5002-4403-869e-e027935e4f0b" containerID="3d05b0fee21513e52b9337a18e940c9bf240916f4dd9ea09c1e9127034c69531" exitCode=0 Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.578865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" event={"ID":"3655d081-5002-4403-869e-e027935e4f0b","Type":"ContainerDied","Data":"3d05b0fee21513e52b9337a18e940c9bf240916f4dd9ea09c1e9127034c69531"} Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.581213 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-59fe-account-create-update-zp867" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.586126 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-59fe-account-create-update-zp867" event={"ID":"b03c3261-5b37-43cb-8148-a9e709c13a1e","Type":"ContainerDied","Data":"e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5"} Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.586156 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e33d2b195e5d9b71d644d9c0b4f42325d365ab6eb7d38f9cd683ca1869624be5" Dec 06 03:29:18 crc kubenswrapper[4801]: I1206 03:29:18.999381 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.005960 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.011219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcp4m\" (UniqueName: \"kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m\") pod \"fc374618-2dac-4256-9048-76b3774d35b8\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.011282 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96xg4\" (UniqueName: \"kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4\") pod \"d727fb0f-a514-492e-9e91-df76ceccf42d\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.018645 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m" (OuterVolumeSpecName: 
"kube-api-access-tcp4m") pod "fc374618-2dac-4256-9048-76b3774d35b8" (UID: "fc374618-2dac-4256-9048-76b3774d35b8"). InnerVolumeSpecName "kube-api-access-tcp4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.018786 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4" (OuterVolumeSpecName: "kube-api-access-96xg4") pod "d727fb0f-a514-492e-9e91-df76ceccf42d" (UID: "d727fb0f-a514-492e-9e91-df76ceccf42d"). InnerVolumeSpecName "kube-api-access-96xg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.113274 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts\") pod \"fc374618-2dac-4256-9048-76b3774d35b8\" (UID: \"fc374618-2dac-4256-9048-76b3774d35b8\") " Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.113366 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts\") pod \"d727fb0f-a514-492e-9e91-df76ceccf42d\" (UID: \"d727fb0f-a514-492e-9e91-df76ceccf42d\") " Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.113740 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcp4m\" (UniqueName: \"kubernetes.io/projected/fc374618-2dac-4256-9048-76b3774d35b8-kube-api-access-tcp4m\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.113780 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96xg4\" (UniqueName: \"kubernetes.io/projected/d727fb0f-a514-492e-9e91-df76ceccf42d-kube-api-access-96xg4\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:19 crc kubenswrapper[4801]: 
I1206 03:29:19.114694 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d727fb0f-a514-492e-9e91-df76ceccf42d" (UID: "d727fb0f-a514-492e-9e91-df76ceccf42d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.114876 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fc374618-2dac-4256-9048-76b3774d35b8" (UID: "fc374618-2dac-4256-9048-76b3774d35b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.217020 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc374618-2dac-4256-9048-76b3774d35b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.217046 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d727fb0f-a514-492e-9e91-df76ceccf42d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.591219 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-kp8cl" event={"ID":"d727fb0f-a514-492e-9e91-df76ceccf42d","Type":"ContainerDied","Data":"1d2fd123d3fd1994ef0cbaca5c638365d7921816e930b078b26cee8e47e61a3b"} Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.591266 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d2fd123d3fd1994ef0cbaca5c638365d7921816e930b078b26cee8e47e61a3b" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.591353 4801 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-kp8cl" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.593987 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-cr4fx" Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.594406 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-cr4fx" event={"ID":"fc374618-2dac-4256-9048-76b3774d35b8","Type":"ContainerDied","Data":"02f22402b8f1d61a8a60eef978094928bb8f5f834ca4eca302144bda05b5e6f5"} Dec 06 03:29:19 crc kubenswrapper[4801]: I1206 03:29:19.594434 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f22402b8f1d61a8a60eef978094928bb8f5f834ca4eca302144bda05b5e6f5" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.017449 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.138640 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts\") pod \"3655d081-5002-4403-869e-e027935e4f0b\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.139038 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgkhq\" (UniqueName: \"kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq\") pod \"3655d081-5002-4403-869e-e027935e4f0b\" (UID: \"3655d081-5002-4403-869e-e027935e4f0b\") " Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.140524 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "3655d081-5002-4403-869e-e027935e4f0b" (UID: "3655d081-5002-4403-869e-e027935e4f0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.143535 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq" (OuterVolumeSpecName: "kube-api-access-pgkhq") pod "3655d081-5002-4403-869e-e027935e4f0b" (UID: "3655d081-5002-4403-869e-e027935e4f0b"). InnerVolumeSpecName "kube-api-access-pgkhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.240997 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3655d081-5002-4403-869e-e027935e4f0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.241043 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgkhq\" (UniqueName: \"kubernetes.io/projected/3655d081-5002-4403-869e-e027935e4f0b-kube-api-access-pgkhq\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.604163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" event={"ID":"3655d081-5002-4403-869e-e027935e4f0b","Type":"ContainerDied","Data":"a65ba33a0ad20d5a33bdbfbd33a1774b5a888aff8d1e039ad4fad9985625cd9e"} Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.604205 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be62-account-create-update-pxz7k" Dec 06 03:29:20 crc kubenswrapper[4801]: I1206 03:29:20.604213 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65ba33a0ad20d5a33bdbfbd33a1774b5a888aff8d1e039ad4fad9985625cd9e" Dec 06 03:29:21 crc kubenswrapper[4801]: I1206 03:29:21.634147 4801 generic.go:334] "Generic (PLEG): container finished" podID="80e58a01-f644-4664-8d9b-f7c22938e4aa" containerID="8cf1887b8275b3cb968c161a9697a9fddb6b57fb7b4ea9f0e06b4576687a7c37" exitCode=0 Dec 06 03:29:21 crc kubenswrapper[4801]: I1206 03:29:21.634387 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2df8-account-create-update-w9l8q" event={"ID":"80e58a01-f644-4664-8d9b-f7c22938e4aa","Type":"ContainerDied","Data":"8cf1887b8275b3cb968c161a9697a9fddb6b57fb7b4ea9f0e06b4576687a7c37"} Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.323411 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7h8zf"] Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324152 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec233495-e3c7-4268-8eb9-532e73143533" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324174 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec233495-e3c7-4268-8eb9-532e73143533" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324191 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="dnsmasq-dns" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324198 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="dnsmasq-dns" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324210 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d727fb0f-a514-492e-9e91-df76ceccf42d" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324216 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d727fb0f-a514-492e-9e91-df76ceccf42d" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324233 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc374618-2dac-4256-9048-76b3774d35b8" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324239 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc374618-2dac-4256-9048-76b3774d35b8" containerName="mariadb-database-create" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324253 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="init" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324259 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="init" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324268 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b03c3261-5b37-43cb-8148-a9e709c13a1e" containerName="mariadb-account-create-update" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324273 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b03c3261-5b37-43cb-8148-a9e709c13a1e" containerName="mariadb-account-create-update" Dec 06 03:29:22 crc kubenswrapper[4801]: E1206 03:29:22.324289 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3655d081-5002-4403-869e-e027935e4f0b" containerName="mariadb-account-create-update" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324295 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3655d081-5002-4403-869e-e027935e4f0b" containerName="mariadb-account-create-update" Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324470 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ec233495-e3c7-4268-8eb9-532e73143533" containerName="mariadb-database-create"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324489 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b03c3261-5b37-43cb-8148-a9e709c13a1e" containerName="mariadb-account-create-update"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324498 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3655d081-5002-4403-869e-e027935e4f0b" containerName="mariadb-account-create-update"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324513 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d727fb0f-a514-492e-9e91-df76ceccf42d" containerName="mariadb-database-create"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324521 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="065ef35f-50b6-4eb5-b46c-961b40e0e29f" containerName="dnsmasq-dns"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.324531 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc374618-2dac-4256-9048-76b3774d35b8" containerName="mariadb-database-create"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.325104 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.328564 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m7qht"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.328839 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.329020 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.349495 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7h8zf"]
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.380973 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6k9m\" (UniqueName: \"kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.381076 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.381113 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.381172 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.483557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.483630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.483705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.483815 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6k9m\" (UniqueName: \"kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.490857 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.491090 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.491642 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.510514 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6k9m\" (UniqueName: \"kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m\") pod \"nova-cell0-conductor-db-sync-7h8zf\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.644314 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7h8zf"
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.672271 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerStarted","Data":"cf3f11f30abd8eb03756e5938af0a1e47c86c68619125ada065e2b975450975e"}
Dec 06 03:29:22 crc kubenswrapper[4801]: I1206 03:29:22.873346 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.119946 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2df8-account-create-update-w9l8q"
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.208680 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts\") pod \"80e58a01-f644-4664-8d9b-f7c22938e4aa\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") "
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.209848 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "80e58a01-f644-4664-8d9b-f7c22938e4aa" (UID: "80e58a01-f644-4664-8d9b-f7c22938e4aa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.210090 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n84x6\" (UniqueName: \"kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6\") pod \"80e58a01-f644-4664-8d9b-f7c22938e4aa\" (UID: \"80e58a01-f644-4664-8d9b-f7c22938e4aa\") "
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.211840 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/80e58a01-f644-4664-8d9b-f7c22938e4aa-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.220541 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6" (OuterVolumeSpecName: "kube-api-access-n84x6") pod "80e58a01-f644-4664-8d9b-f7c22938e4aa" (UID: "80e58a01-f644-4664-8d9b-f7c22938e4aa"). InnerVolumeSpecName "kube-api-access-n84x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.249863 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7h8zf"]
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.313460 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n84x6\" (UniqueName: \"kubernetes.io/projected/80e58a01-f644-4664-8d9b-f7c22938e4aa-kube-api-access-n84x6\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.684365 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2df8-account-create-update-w9l8q" event={"ID":"80e58a01-f644-4664-8d9b-f7c22938e4aa","Type":"ContainerDied","Data":"3783c4e8887ba61b0d8cd832dcc344f03fc6684dd996e06360e9fdbe9cc5aeb6"}
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.684778 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3783c4e8887ba61b0d8cd832dcc344f03fc6684dd996e06360e9fdbe9cc5aeb6"
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.684862 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2df8-account-create-update-w9l8q"
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.702782 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerStarted","Data":"334aec3436e2baa74bfff5b4da17383359565fb8197ec5f6767f1919f0c68189"}
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.714014 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerStarted","Data":"3f63a476d9a8bdb4b56e5881a893185325a921c598a9481128c1265f94211794"}
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.714290 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-central-agent" containerID="cri-o://137fc6182fdfdd0b00730cd6ccd69179255145a07af4a7dad6bf05c2c71b7594" gracePeriod=30
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.714701 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.715122 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="proxy-httpd" containerID="cri-o://3f63a476d9a8bdb4b56e5881a893185325a921c598a9481128c1265f94211794" gracePeriod=30
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.715179 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="sg-core" containerID="cri-o://cf3f11f30abd8eb03756e5938af0a1e47c86c68619125ada065e2b975450975e" gracePeriod=30
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.715225 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-notification-agent" containerID="cri-o://7dfa3cba33ed581b1e3409051a3870e52726c2e705e26369b5ae71325ca2db93" gracePeriod=30
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.726659 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" event={"ID":"23456664-b3cb-40c4-a0a1-a944eef10179","Type":"ContainerStarted","Data":"30b248710982419777a7b7facf2a069af78a80dc16b62c836b4199d3e5a935b0"}
Dec 06 03:29:23 crc kubenswrapper[4801]: I1206 03:29:23.754440 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.131292878 podStartE2EDuration="16.754408263s" podCreationTimestamp="2025-12-06 03:29:07 +0000 UTC" firstStartedPulling="2025-12-06 03:29:08.64768013 +0000 UTC m=+1401.770287702" lastFinishedPulling="2025-12-06 03:29:23.270795515 +0000 UTC m=+1416.393403087" observedRunningTime="2025-12-06 03:29:23.749532199 +0000 UTC m=+1416.872139771" watchObservedRunningTime="2025-12-06 03:29:23.754408263 +0000 UTC m=+1416.877015855"
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.750369 4801 generic.go:334] "Generic (PLEG): container finished" podID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerID="3f63a476d9a8bdb4b56e5881a893185325a921c598a9481128c1265f94211794" exitCode=0
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751002 4801 generic.go:334] "Generic (PLEG): container finished" podID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerID="cf3f11f30abd8eb03756e5938af0a1e47c86c68619125ada065e2b975450975e" exitCode=2
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.750412 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerDied","Data":"3f63a476d9a8bdb4b56e5881a893185325a921c598a9481128c1265f94211794"}
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751050 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerDied","Data":"cf3f11f30abd8eb03756e5938af0a1e47c86c68619125ada065e2b975450975e"}
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751068 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerDied","Data":"7dfa3cba33ed581b1e3409051a3870e52726c2e705e26369b5ae71325ca2db93"}
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751015 4801 generic.go:334] "Generic (PLEG): container finished" podID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerID="7dfa3cba33ed581b1e3409051a3870e52726c2e705e26369b5ae71325ca2db93" exitCode=0
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751092 4801 generic.go:334] "Generic (PLEG): container finished" podID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerID="137fc6182fdfdd0b00730cd6ccd69179255145a07af4a7dad6bf05c2c71b7594" exitCode=0
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.751153 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerDied","Data":"137fc6182fdfdd0b00730cd6ccd69179255145a07af4a7dad6bf05c2c71b7594"}
Dec 06 03:29:24 crc kubenswrapper[4801]: I1206 03:29:24.758859 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerStarted","Data":"73117ba5e315a920cc42a5dfcfb5c5f3b818e0980e03a07255143171acdc7aba"}
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.092844 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.127035 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.61955886 podStartE2EDuration="50.127015398s" podCreationTimestamp="2025-12-06 03:28:35 +0000 UTC" firstStartedPulling="2025-12-06 03:28:38.077884886 +0000 UTC m=+1371.200492458" lastFinishedPulling="2025-12-06 03:29:22.585341424 +0000 UTC m=+1415.707948996" observedRunningTime="2025-12-06 03:29:24.781342613 +0000 UTC m=+1417.903950205" watchObservedRunningTime="2025-12-06 03:29:25.127015398 +0000 UTC m=+1418.249622970"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.153971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.154716 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.154719 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.154826 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.154868 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.154897 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.155006 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgpxw\" (UniqueName: \"kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.155085 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd\") pod \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\" (UID: \"8af9567c-7fd3-4a06-8a78-2acc56974fe7\") "
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.155934 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.162743 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts" (OuterVolumeSpecName: "scripts") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.162955 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.175447 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw" (OuterVolumeSpecName: "kube-api-access-bgpxw") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "kube-api-access-bgpxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.228148 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.257637 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgpxw\" (UniqueName: \"kubernetes.io/projected/8af9567c-7fd3-4a06-8a78-2acc56974fe7-kube-api-access-bgpxw\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.258037 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8af9567c-7fd3-4a06-8a78-2acc56974fe7-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.258684 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.258719 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.264966 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.313951 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data" (OuterVolumeSpecName: "config-data") pod "8af9567c-7fd3-4a06-8a78-2acc56974fe7" (UID: "8af9567c-7fd3-4a06-8a78-2acc56974fe7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.359995 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.360046 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8af9567c-7fd3-4a06-8a78-2acc56974fe7-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.650083 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.776341 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.784144 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.784849 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8af9567c-7fd3-4a06-8a78-2acc56974fe7","Type":"ContainerDied","Data":"95054bf3388626baceabc0bc9b011645a07ade75a30fb5bd78ad29c08ee2cf80"}
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.784928 4801 scope.go:117] "RemoveContainer" containerID="3f63a476d9a8bdb4b56e5881a893185325a921c598a9481128c1265f94211794"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.820500 4801 scope.go:117] "RemoveContainer" containerID="cf3f11f30abd8eb03756e5938af0a1e47c86c68619125ada065e2b975450975e"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.820640 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.826906 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853357 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:25 crc kubenswrapper[4801]: E1206 03:29:25.853690 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-central-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853706 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-central-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: E1206 03:29:25.853720 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-notification-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853726 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-notification-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: E1206 03:29:25.853741 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e58a01-f644-4664-8d9b-f7c22938e4aa" containerName="mariadb-account-create-update"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853749 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e58a01-f644-4664-8d9b-f7c22938e4aa" containerName="mariadb-account-create-update"
Dec 06 03:29:25 crc kubenswrapper[4801]: E1206 03:29:25.853785 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="proxy-httpd"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853792 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="proxy-httpd"
Dec 06 03:29:25 crc kubenswrapper[4801]: E1206 03:29:25.853803 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="sg-core"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.853808 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="sg-core"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.854271 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e58a01-f644-4664-8d9b-f7c22938e4aa" containerName="mariadb-account-create-update"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.854286 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="sg-core"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.854299 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="proxy-httpd"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.854340 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-central-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.854352 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" containerName="ceilometer-notification-agent"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.856434 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.861009 4801 scope.go:117] "RemoveContainer" containerID="7dfa3cba33ed581b1e3409051a3870e52726c2e705e26369b5ae71325ca2db93"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.861157 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.871159 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.887530 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.903890 4801 scope.go:117] "RemoveContainer" containerID="137fc6182fdfdd0b00730cd6ccd69179255145a07af4a7dad6bf05c2c71b7594"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.971911 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.971983 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.972023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.972041 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8tx\" (UniqueName: \"kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.972086 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.972116 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:25 crc kubenswrapper[4801]: I1206 03:29:25.972166 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073317 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073367 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw8tx\" (UniqueName: \"kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073420 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073453 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073506 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.073564 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.074539 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.074693 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.089408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.090320 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.090534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.092929 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8tx\" (UniqueName: \"kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.095589 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts\") pod \"ceilometer-0\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.181836 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.678771 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:29:26 crc kubenswrapper[4801]: I1206 03:29:26.800307 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerStarted","Data":"9e90c359d7a6794c662070d2c80580e0f200b410cdc32376d130c9ecb9ec4852"}
Dec 06 03:29:27 crc kubenswrapper[4801]: I1206 03:29:27.226677 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af9567c-7fd3-4a06-8a78-2acc56974fe7" path="/var/lib/kubelet/pods/8af9567c-7fd3-4a06-8a78-2acc56974fe7/volumes"
Dec 06 03:29:31 crc kubenswrapper[4801]: I1206 03:29:31.019702 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 06 03:29:31 crc kubenswrapper[4801]: I1206 03:29:31.087876 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 03:29:31 crc kubenswrapper[4801]: I1206 03:29:31.862703 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="cinder-scheduler" containerID="cri-o://334aec3436e2baa74bfff5b4da17383359565fb8197ec5f6767f1919f0c68189" gracePeriod=30
Dec 06 03:29:31 crc kubenswrapper[4801]: I1206 03:29:31.863671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerStarted","Data":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"}
Dec 06 03:29:31 crc kubenswrapper[4801]: I1206 03:29:31.864284 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="probe" containerID="cri-o://73117ba5e315a920cc42a5dfcfb5c5f3b818e0980e03a07255143171acdc7aba" gracePeriod=30
Dec 06 03:29:32 crc kubenswrapper[4801]: I1206 03:29:32.877316 4801 generic.go:334] "Generic (PLEG): container finished" podID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerID="73117ba5e315a920cc42a5dfcfb5c5f3b818e0980e03a07255143171acdc7aba" exitCode=0
Dec 06 03:29:32 crc kubenswrapper[4801]: I1206 03:29:32.877375 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerDied","Data":"73117ba5e315a920cc42a5dfcfb5c5f3b818e0980e03a07255143171acdc7aba"}
Dec 06 03:29:33 crc kubenswrapper[4801]: I1206 03:29:33.888879 4801 generic.go:334] "Generic (PLEG): container finished" podID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerID="334aec3436e2baa74bfff5b4da17383359565fb8197ec5f6767f1919f0c68189" exitCode=0
Dec 06 03:29:33 crc kubenswrapper[4801]: I1206 03:29:33.888939 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerDied","Data":"334aec3436e2baa74bfff5b4da17383359565fb8197ec5f6767f1919f0c68189"} Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.059630 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:29:39 crc kubenswrapper[4801]: E1206 03:29:39.444633 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Dec 06 03:29:39 crc kubenswrapper[4801]: E1206 03:29:39.445334 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kub
e-api-access-l6k9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-7h8zf_openstack(23456664-b3cb-40c4-a0a1-a944eef10179): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 03:29:39 crc kubenswrapper[4801]: E1206 03:29:39.447165 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.758628 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.860872 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.860963 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861020 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861064 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861144 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zgn\" (UniqueName: \"kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861151 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861238 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom\") pod \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\" (UID: \"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177\") " Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.861689 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.866178 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.866274 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn" (OuterVolumeSpecName: "kube-api-access-v7zgn") pod "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "kube-api-access-v7zgn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.866576 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts" (OuterVolumeSpecName: "scripts") pod "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.927696 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.956901 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerStarted","Data":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.958898 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177","Type":"ContainerDied","Data":"08b7a437f5a64123bae54ceba26a433b632b0c9fb9944e805b595fafd9fab681"} Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.958920 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.958961 4801 scope.go:117] "RemoveContainer" containerID="73117ba5e315a920cc42a5dfcfb5c5f3b818e0980e03a07255143171acdc7aba" Dec 06 03:29:39 crc kubenswrapper[4801]: E1206 03:29:39.960728 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.962950 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.962971 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.962985 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.962994 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7zgn\" (UniqueName: \"kubernetes.io/projected/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-kube-api-access-v7zgn\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.974390 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data" (OuterVolumeSpecName: "config-data") pod 
"f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" (UID: "f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:39 crc kubenswrapper[4801]: I1206 03:29:39.988180 4801 scope.go:117] "RemoveContainer" containerID="334aec3436e2baa74bfff5b4da17383359565fb8197ec5f6767f1919f0c68189" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.066062 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.297454 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.309329 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.338710 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:29:40 crc kubenswrapper[4801]: E1206 03:29:40.339386 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="probe" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.339499 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="probe" Dec 06 03:29:40 crc kubenswrapper[4801]: E1206 03:29:40.339623 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="cinder-scheduler" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.339686 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="cinder-scheduler" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.340005 4801 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="cinder-scheduler" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.340114 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" containerName="probe" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.341312 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.346037 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.360410 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.506074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.506461 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.506976 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.507277 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.507344 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdzxg\" (UniqueName: \"kubernetes.io/projected/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-kube-api-access-wdzxg\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.507403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608796 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608857 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608876 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wdzxg\" (UniqueName: \"kubernetes.io/projected/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-kube-api-access-wdzxg\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608899 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608934 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.608968 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.614975 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.615492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-scripts\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.616220 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.623922 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.629339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzxg\" (UniqueName: \"kubernetes.io/projected/4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241-kube-api-access-wdzxg\") pod \"cinder-scheduler-0\" (UID: \"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241\") " pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.667095 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.972586 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:40 crc kubenswrapper[4801]: I1206 03:29:40.976516 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerStarted","Data":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} Dec 06 03:29:41 crc kubenswrapper[4801]: I1206 03:29:41.183037 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 03:29:41 crc kubenswrapper[4801]: W1206 03:29:41.194241 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ac98ef4_8cfc_440f_b9b6_f1a92ae7e241.slice/crio-891f73999c8a0c5e525d8e77f4887e9d814224de11f275eeb228a776778aa1d2 WatchSource:0}: Error finding container 891f73999c8a0c5e525d8e77f4887e9d814224de11f275eeb228a776778aa1d2: Status 404 returned error can't find the container with id 891f73999c8a0c5e525d8e77f4887e9d814224de11f275eeb228a776778aa1d2 Dec 06 03:29:41 crc kubenswrapper[4801]: I1206 03:29:41.226589 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177" path="/var/lib/kubelet/pods/f0e221d0-6ea7-4bf8-b89e-5c4b65d9e177/volumes" Dec 06 03:29:42 crc kubenswrapper[4801]: I1206 03:29:42.028220 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241","Type":"ContainerStarted","Data":"891f73999c8a0c5e525d8e77f4887e9d814224de11f275eeb228a776778aa1d2"} Dec 06 03:29:42 crc kubenswrapper[4801]: I1206 03:29:42.208860 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-554d4f888f-vn47n" Dec 06 03:29:42 crc kubenswrapper[4801]: I1206 03:29:42.276122 
4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:29:42 crc kubenswrapper[4801]: I1206 03:29:42.276403 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-api" containerID="cri-o://e5af1605c3016f7ad5c03ce67dc882858c2ec53a4da1a0729ad09e2517eeb1bf" gracePeriod=30 Dec 06 03:29:42 crc kubenswrapper[4801]: I1206 03:29:42.276914 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74c6fcb784-b9mbt" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" containerID="cri-o://ef83a78b23fc03233b49fa83bb7bf37edf125046cf6daeb132eaff1dae7fe869" gracePeriod=30 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.049896 4801 generic.go:334] "Generic (PLEG): container finished" podID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerID="ef83a78b23fc03233b49fa83bb7bf37edf125046cf6daeb132eaff1dae7fe869" exitCode=0 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.049982 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerDied","Data":"ef83a78b23fc03233b49fa83bb7bf37edf125046cf6daeb132eaff1dae7fe869"} Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.057559 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241","Type":"ContainerStarted","Data":"c038b6a0030bb6ee95db2b4eb644f91fd6e098b387a4ab25b5905c453303686a"} Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.067090 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerStarted","Data":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} Dec 06 03:29:43 crc 
kubenswrapper[4801]: I1206 03:29:43.067220 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-central-agent" containerID="cri-o://ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" gracePeriod=30 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.067242 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.067320 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="proxy-httpd" containerID="cri-o://997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" gracePeriod=30 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.067364 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="sg-core" containerID="cri-o://aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" gracePeriod=30 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.067400 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-notification-agent" containerID="cri-o://24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" gracePeriod=30 Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.103650 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.348913625 podStartE2EDuration="18.103625073s" podCreationTimestamp="2025-12-06 03:29:25 +0000 UTC" firstStartedPulling="2025-12-06 03:29:26.68923204 +0000 UTC m=+1419.811839612" lastFinishedPulling="2025-12-06 03:29:42.443943488 +0000 UTC m=+1435.566551060" 
observedRunningTime="2025-12-06 03:29:43.095953314 +0000 UTC m=+1436.218560896" watchObservedRunningTime="2025-12-06 03:29:43.103625073 +0000 UTC m=+1436.226232635" Dec 06 03:29:43 crc kubenswrapper[4801]: I1206 03:29:43.967971 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.070927 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071068 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071117 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071164 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071299 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data\") pod 
\"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071347 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071490 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8tx\" (UniqueName: \"kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx\") pod \"750085a0-ce6c-4e41-872a-2dac5106fc4c\" (UID: \"750085a0-ce6c-4e41-872a-2dac5106fc4c\") " Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.071908 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.072817 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.072862 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.081029 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts" (OuterVolumeSpecName: "scripts") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.084792 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241","Type":"ContainerStarted","Data":"a3b12078f9f23f46917d548cec269977e4188ae510cd9f901401788a9ac8dccf"} Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.089626 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx" (OuterVolumeSpecName: "kube-api-access-sw8tx") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "kube-api-access-sw8tx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090172 4801 generic.go:334] "Generic (PLEG): container finished" podID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" exitCode=0 Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090218 4801 generic.go:334] "Generic (PLEG): container finished" podID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" exitCode=2 Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090230 4801 generic.go:334] "Generic (PLEG): container finished" podID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" exitCode=0 Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090240 4801 generic.go:334] "Generic (PLEG): container finished" podID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" exitCode=0 Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090266 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerDied","Data":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090298 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerDied","Data":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090312 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerDied","Data":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} Dec 06 03:29:44 crc 
kubenswrapper[4801]: I1206 03:29:44.090326 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerDied","Data":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"} Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090337 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"750085a0-ce6c-4e41-872a-2dac5106fc4c","Type":"ContainerDied","Data":"9e90c359d7a6794c662070d2c80580e0f200b410cdc32376d130c9ecb9ec4852"} Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090358 4801 scope.go:117] "RemoveContainer" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.090717 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.115659 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.122394 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.12237023 podStartE2EDuration="4.12237023s" podCreationTimestamp="2025-12-06 03:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:29:44.108302205 +0000 UTC m=+1437.230909777" watchObservedRunningTime="2025-12-06 03:29:44.12237023 +0000 UTC m=+1437.244977792" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.128509 4801 scope.go:117] "RemoveContainer" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.151176 4801 scope.go:117] "RemoveContainer" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.159387 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.175671 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.175715 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/750085a0-ce6c-4e41-872a-2dac5106fc4c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.175727 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.175740 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8tx\" (UniqueName: \"kubernetes.io/projected/750085a0-ce6c-4e41-872a-2dac5106fc4c-kube-api-access-sw8tx\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.175751 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.184693 4801 scope.go:117] "RemoveContainer" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.215589 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data" (OuterVolumeSpecName: "config-data") pod "750085a0-ce6c-4e41-872a-2dac5106fc4c" (UID: "750085a0-ce6c-4e41-872a-2dac5106fc4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.232117 4801 scope.go:117] "RemoveContainer" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.233809 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": container with ID starting with 997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e not found: ID does not exist" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.233878 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} err="failed to get container status \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": rpc error: code = NotFound desc = could not find container \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": container with ID starting with 997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.233919 4801 scope.go:117] "RemoveContainer" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.234270 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": container with ID starting with aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da not found: ID does not exist" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234301 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} err="failed to get container status \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": rpc error: code = NotFound desc = could not find container \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": container with ID starting with aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234317 4801 scope.go:117] "RemoveContainer" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.234548 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": container with ID starting with 24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60 not found: ID does not exist" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234572 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} err="failed to get container status \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": rpc error: code = NotFound desc = could not find container \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": container with ID starting with 24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234589 4801 scope.go:117] "RemoveContainer" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 
03:29:44.234931 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": container with ID starting with ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016 not found: ID does not exist" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234955 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"} err="failed to get container status \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": rpc error: code = NotFound desc = could not find container \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": container with ID starting with ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.234975 4801 scope.go:117] "RemoveContainer" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.235280 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} err="failed to get container status \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": rpc error: code = NotFound desc = could not find container \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": container with ID starting with 997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.235315 4801 scope.go:117] "RemoveContainer" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc 
kubenswrapper[4801]: I1206 03:29:44.235734 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} err="failed to get container status \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": rpc error: code = NotFound desc = could not find container \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": container with ID starting with aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.235769 4801 scope.go:117] "RemoveContainer" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.236012 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} err="failed to get container status \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": rpc error: code = NotFound desc = could not find container \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": container with ID starting with 24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.236031 4801 scope.go:117] "RemoveContainer" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237069 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"} err="failed to get container status \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": rpc error: code = NotFound desc = could not find container \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": container 
with ID starting with ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237170 4801 scope.go:117] "RemoveContainer" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237436 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} err="failed to get container status \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": rpc error: code = NotFound desc = could not find container \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": container with ID starting with 997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237465 4801 scope.go:117] "RemoveContainer" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237694 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} err="failed to get container status \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": rpc error: code = NotFound desc = could not find container \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": container with ID starting with aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.237736 4801 scope.go:117] "RemoveContainer" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238142 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} err="failed to get container status \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": rpc error: code = NotFound desc = could not find container \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": container with ID starting with 24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238164 4801 scope.go:117] "RemoveContainer" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238343 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"} err="failed to get container status \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": rpc error: code = NotFound desc = could not find container \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": container with ID starting with ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238364 4801 scope.go:117] "RemoveContainer" containerID="997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238534 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e"} err="failed to get container status \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": rpc error: code = NotFound desc = could not find container \"997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e\": container with ID starting with 997e1048242473f8dbffeb15df890634bebcc5fd7f44687917c2f5b6b7481c8e not found: ID does not 
exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238553 4801 scope.go:117] "RemoveContainer" containerID="aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238708 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da"} err="failed to get container status \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": rpc error: code = NotFound desc = could not find container \"aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da\": container with ID starting with aae82c3ff37b19565c47bc9ff4ce9b49441e2fe596c9a1b7f193cfc502a788da not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238722 4801 scope.go:117] "RemoveContainer" containerID="24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238900 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60"} err="failed to get container status \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": rpc error: code = NotFound desc = could not find container \"24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60\": container with ID starting with 24218b16e7b5030a41457a064e7c6e5a5fa54180ffef4aea0540412be61f2c60 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.238913 4801 scope.go:117] "RemoveContainer" containerID="ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.239076 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016"} err="failed to get container status 
\"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": rpc error: code = NotFound desc = could not find container \"ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016\": container with ID starting with ba4cebe0b65e1b1c2620193ae1353f43f3827e4ad2d2147b6aedfd8747e07016 not found: ID does not exist" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.277117 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750085a0-ce6c-4e41-872a-2dac5106fc4c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.456523 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.469255 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.475713 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.476236 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="proxy-httpd" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476255 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="proxy-httpd" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.476273 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-notification-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476281 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-notification-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.476301 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="sg-core" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476309 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="sg-core" Dec 06 03:29:44 crc kubenswrapper[4801]: E1206 03:29:44.476343 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-central-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476351 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-central-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476572 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-central-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476591 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="proxy-httpd" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476609 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="ceilometer-notification-agent" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.476626 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" containerName="sg-core" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.478875 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.486836 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.486928 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.488679 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.585814 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-595q9\" (UniqueName: \"kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.585903 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.585988 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.586013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd\") pod \"ceilometer-0\" (UID: 
\"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.586089 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.586128 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.586153 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687366 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687457 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687486 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-595q9\" (UniqueName: \"kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687538 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687637 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.687665 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.688208 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " 
pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.688479 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.693281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.693664 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.694657 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.695231 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.708869 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-595q9\" (UniqueName: 
\"kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9\") pod \"ceilometer-0\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " pod="openstack/ceilometer-0" Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.763972 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:44 crc kubenswrapper[4801]: I1206 03:29:44.764877 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:45 crc kubenswrapper[4801]: I1206 03:29:45.227308 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750085a0-ce6c-4e41-872a-2dac5106fc4c" path="/var/lib/kubelet/pods/750085a0-ce6c-4e41-872a-2dac5106fc4c/volumes" Dec 06 03:29:45 crc kubenswrapper[4801]: I1206 03:29:45.278275 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:45 crc kubenswrapper[4801]: I1206 03:29:45.667886 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 03:29:46 crc kubenswrapper[4801]: I1206 03:29:46.114546 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerStarted","Data":"e32fbee07f8dd7e06892298544d0ca82bef0f7a2b7c80580e16c447a69f7c860"} Dec 06 03:29:46 crc kubenswrapper[4801]: I1206 03:29:46.115046 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerStarted","Data":"b4faa8a4c68fa34c6a3eea74d2dd00fdf23dcd905902865d380a4349e547488d"} Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.128864 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerStarted","Data":"93780e205618db7bb768dd5eb952266589350a76a78549d4ab36a1e321a736c3"} Dec 06 03:29:47 crc 
kubenswrapper[4801]: I1206 03:29:47.132379 4801 generic.go:334] "Generic (PLEG): container finished" podID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerID="e5af1605c3016f7ad5c03ce67dc882858c2ec53a4da1a0729ad09e2517eeb1bf" exitCode=0 Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.132433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerDied","Data":"e5af1605c3016f7ad5c03ce67dc882858c2ec53a4da1a0729ad09e2517eeb1bf"} Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.283681 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.442861 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config\") pod \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.442923 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config\") pod \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.442954 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle\") pod \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.443074 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs\") pod \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.443120 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sscqs\" (UniqueName: \"kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs\") pod \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\" (UID: \"d93d32ae-f984-4eac-9fdf-80479f40f4bb\") " Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.447919 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d93d32ae-f984-4eac-9fdf-80479f40f4bb" (UID: "d93d32ae-f984-4eac-9fdf-80479f40f4bb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.448259 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs" (OuterVolumeSpecName: "kube-api-access-sscqs") pod "d93d32ae-f984-4eac-9fdf-80479f40f4bb" (UID: "d93d32ae-f984-4eac-9fdf-80479f40f4bb"). InnerVolumeSpecName "kube-api-access-sscqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.494360 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config" (OuterVolumeSpecName: "config") pod "d93d32ae-f984-4eac-9fdf-80479f40f4bb" (UID: "d93d32ae-f984-4eac-9fdf-80479f40f4bb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.498839 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d93d32ae-f984-4eac-9fdf-80479f40f4bb" (UID: "d93d32ae-f984-4eac-9fdf-80479f40f4bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.515903 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d93d32ae-f984-4eac-9fdf-80479f40f4bb" (UID: "d93d32ae-f984-4eac-9fdf-80479f40f4bb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.545396 4801 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.545438 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sscqs\" (UniqueName: \"kubernetes.io/projected/d93d32ae-f984-4eac-9fdf-80479f40f4bb-kube-api-access-sscqs\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.545483 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.545495 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-config\") on node \"crc\" DevicePath \"\"" 
Dec 06 03:29:47 crc kubenswrapper[4801]: I1206 03:29:47.545507 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d93d32ae-f984-4eac-9fdf-80479f40f4bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.143797 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerStarted","Data":"e4f949d33959ab6b51c4d9410c64f70a619b5bf9595aac026a694a1cbfdca19a"} Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.146067 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74c6fcb784-b9mbt" event={"ID":"d93d32ae-f984-4eac-9fdf-80479f40f4bb","Type":"ContainerDied","Data":"4c18a16dd6af8ca679a812e5f2156b3f5aceffdb099d8adcbaf70daa0ca3c5fc"} Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.146132 4801 scope.go:117] "RemoveContainer" containerID="ef83a78b23fc03233b49fa83bb7bf37edf125046cf6daeb132eaff1dae7fe869" Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.146151 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74c6fcb784-b9mbt" Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.174673 4801 scope.go:117] "RemoveContainer" containerID="e5af1605c3016f7ad5c03ce67dc882858c2ec53a4da1a0729ad09e2517eeb1bf" Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.183869 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:29:48 crc kubenswrapper[4801]: I1206 03:29:48.193967 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74c6fcb784-b9mbt"] Dec 06 03:29:49 crc kubenswrapper[4801]: I1206 03:29:49.223472 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" path="/var/lib/kubelet/pods/d93d32ae-f984-4eac-9fdf-80479f40f4bb/volumes" Dec 06 03:29:50 crc kubenswrapper[4801]: I1206 03:29:50.877712 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172128 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerStarted","Data":"c4a053d45b244f4d07e058916e74b674b89a63da73ccd6976c8255bae7810fe4"} Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172270 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172259 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-central-agent" containerID="cri-o://e32fbee07f8dd7e06892298544d0ca82bef0f7a2b7c80580e16c447a69f7c860" gracePeriod=30 Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172310 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="proxy-httpd" containerID="cri-o://c4a053d45b244f4d07e058916e74b674b89a63da73ccd6976c8255bae7810fe4" gracePeriod=30 Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172369 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="sg-core" containerID="cri-o://e4f949d33959ab6b51c4d9410c64f70a619b5bf9595aac026a694a1cbfdca19a" gracePeriod=30 Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.172421 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-notification-agent" containerID="cri-o://93780e205618db7bb768dd5eb952266589350a76a78549d4ab36a1e321a736c3" gracePeriod=30 Dec 06 03:29:51 crc kubenswrapper[4801]: I1206 03:29:51.206078 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.447996975 podStartE2EDuration="7.206055884s" podCreationTimestamp="2025-12-06 03:29:44 +0000 UTC" firstStartedPulling="2025-12-06 03:29:45.282680999 +0000 UTC m=+1438.405288571" lastFinishedPulling="2025-12-06 03:29:50.040739908 +0000 UTC m=+1443.163347480" observedRunningTime="2025-12-06 03:29:51.202131056 +0000 UTC m=+1444.324738648" watchObservedRunningTime="2025-12-06 03:29:51.206055884 +0000 UTC m=+1444.328663456" Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.190801 4801 generic.go:334] "Generic (PLEG): container finished" podID="ad020bd8-f121-4606-94d5-c67546885c5b" containerID="c4a053d45b244f4d07e058916e74b674b89a63da73ccd6976c8255bae7810fe4" exitCode=0 Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.191410 4801 generic.go:334] "Generic (PLEG): container finished" podID="ad020bd8-f121-4606-94d5-c67546885c5b" containerID="e4f949d33959ab6b51c4d9410c64f70a619b5bf9595aac026a694a1cbfdca19a" exitCode=2 
Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.191433 4801 generic.go:334] "Generic (PLEG): container finished" podID="ad020bd8-f121-4606-94d5-c67546885c5b" containerID="93780e205618db7bb768dd5eb952266589350a76a78549d4ab36a1e321a736c3" exitCode=0 Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.191415 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerDied","Data":"c4a053d45b244f4d07e058916e74b674b89a63da73ccd6976c8255bae7810fe4"} Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.191485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerDied","Data":"e4f949d33959ab6b51c4d9410c64f70a619b5bf9595aac026a694a1cbfdca19a"} Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.191723 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerDied","Data":"93780e205618db7bb768dd5eb952266589350a76a78549d4ab36a1e321a736c3"} Dec 06 03:29:53 crc kubenswrapper[4801]: I1206 03:29:53.193947 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" event={"ID":"23456664-b3cb-40c4-a0a1-a944eef10179","Type":"ContainerStarted","Data":"0b3cc23b79243a74ecad6499497fb48a0a57df82fb2f6070413c9f8149e8d7e1"} Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.215247 4801 generic.go:334] "Generic (PLEG): container finished" podID="ad020bd8-f121-4606-94d5-c67546885c5b" containerID="e32fbee07f8dd7e06892298544d0ca82bef0f7a2b7c80580e16c447a69f7c860" exitCode=0 Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.223314 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerDied","Data":"e32fbee07f8dd7e06892298544d0ca82bef0f7a2b7c80580e16c447a69f7c860"} Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.517585 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.539325 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" podStartSLOduration=4.230211515 podStartE2EDuration="33.539305724s" podCreationTimestamp="2025-12-06 03:29:22 +0000 UTC" firstStartedPulling="2025-12-06 03:29:23.275975426 +0000 UTC m=+1416.398582998" lastFinishedPulling="2025-12-06 03:29:52.585069625 +0000 UTC m=+1445.707677207" observedRunningTime="2025-12-06 03:29:53.219529849 +0000 UTC m=+1446.342137421" watchObservedRunningTime="2025-12-06 03:29:55.539305724 +0000 UTC m=+1448.661913296" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590609 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590667 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590714 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") 
" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590784 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-595q9\" (UniqueName: \"kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590859 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.590907 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.591010 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd\") pod \"ad020bd8-f121-4606-94d5-c67546885c5b\" (UID: \"ad020bd8-f121-4606-94d5-c67546885c5b\") " Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.591389 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.592376 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.609769 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9" (OuterVolumeSpecName: "kube-api-access-595q9") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "kube-api-access-595q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.610988 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts" (OuterVolumeSpecName: "scripts") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.622492 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.659311 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.681370 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data" (OuterVolumeSpecName: "config-data") pod "ad020bd8-f121-4606-94d5-c67546885c5b" (UID: "ad020bd8-f121-4606-94d5-c67546885c5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.693894 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.693981 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.693995 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.694008 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ad020bd8-f121-4606-94d5-c67546885c5b-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc 
kubenswrapper[4801]: I1206 03:29:55.694021 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-595q9\" (UniqueName: \"kubernetes.io/projected/ad020bd8-f121-4606-94d5-c67546885c5b-kube-api-access-595q9\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.694032 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:55 crc kubenswrapper[4801]: I1206 03:29:55.694041 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad020bd8-f121-4606-94d5-c67546885c5b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.232623 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ad020bd8-f121-4606-94d5-c67546885c5b","Type":"ContainerDied","Data":"b4faa8a4c68fa34c6a3eea74d2dd00fdf23dcd905902865d380a4349e547488d"} Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.232706 4801 scope.go:117] "RemoveContainer" containerID="c4a053d45b244f4d07e058916e74b674b89a63da73ccd6976c8255bae7810fe4" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.232823 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.261993 4801 scope.go:117] "RemoveContainer" containerID="e4f949d33959ab6b51c4d9410c64f70a619b5bf9595aac026a694a1cbfdca19a" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.285942 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.289632 4801 scope.go:117] "RemoveContainer" containerID="93780e205618db7bb768dd5eb952266589350a76a78549d4ab36a1e321a736c3" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.298631 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.322681 4801 scope.go:117] "RemoveContainer" containerID="e32fbee07f8dd7e06892298544d0ca82bef0f7a2b7c80580e16c447a69f7c860" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.329146 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.329917 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-api" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.329946 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-api" Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.329978 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.329996 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.330022 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" 
containerName="sg-core" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330034 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="sg-core" Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.330050 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-central-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330061 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-central-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.330100 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="proxy-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330111 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="proxy-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: E1206 03:29:56.330129 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-notification-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330139 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-notification-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330467 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-notification-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330489 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="ceilometer-central-agent" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330506 4801 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="sg-core" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330521 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" containerName="proxy-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330535 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-api" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.330553 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d93d32ae-f984-4eac-9fdf-80479f40f4bb" containerName="neutron-httpd" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.334053 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.336387 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.341407 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.364223 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.409364 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.409820 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd\") pod \"ceilometer-0\" (UID: 
\"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.409947 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.410036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.410237 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.410366 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6gs7\" (UniqueName: \"kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.410540 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 
03:29:56.512344 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512420 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512448 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512479 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512570 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data\") pod \"ceilometer-0\" (UID: 
\"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.512600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6gs7\" (UniqueName: \"kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.513276 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.513397 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.520488 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.520615 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.524285 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.524376 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.542358 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6gs7\" (UniqueName: \"kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7\") pod \"ceilometer-0\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") " pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.663321 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:29:56 crc kubenswrapper[4801]: I1206 03:29:56.911157 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:29:56 crc kubenswrapper[4801]: W1206 03:29:56.912734 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2dfd6b_63eb_4622_bee5_5f9f77be2d25.slice/crio-3d15a363b82f27c480840a921ec5741415a3f4166c0ce0c226567e211206fe0d WatchSource:0}: Error finding container 3d15a363b82f27c480840a921ec5741415a3f4166c0ce0c226567e211206fe0d: Status 404 returned error can't find the container with id 3d15a363b82f27c480840a921ec5741415a3f4166c0ce0c226567e211206fe0d Dec 06 03:29:57 crc kubenswrapper[4801]: I1206 03:29:57.233248 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad020bd8-f121-4606-94d5-c67546885c5b" path="/var/lib/kubelet/pods/ad020bd8-f121-4606-94d5-c67546885c5b/volumes" Dec 06 
03:29:57 crc kubenswrapper[4801]: I1206 03:29:57.249785 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerStarted","Data":"3d15a363b82f27c480840a921ec5741415a3f4166c0ce0c226567e211206fe0d"} Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.160090 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn"] Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.163280 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.170721 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.180318 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn"] Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.217401 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.286482 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerStarted","Data":"e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4"} Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.318120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.318171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.318353 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2fqs\" (UniqueName: \"kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.419958 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.420181 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.420295 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2fqs\" (UniqueName: 
\"kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.422692 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.436293 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.438472 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2fqs\" (UniqueName: \"kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs\") pod \"collect-profiles-29416530-tpmtn\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:00 crc kubenswrapper[4801]: I1206 03:30:00.536886 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:01 crc kubenswrapper[4801]: I1206 03:30:01.006352 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn"] Dec 06 03:30:01 crc kubenswrapper[4801]: I1206 03:30:01.298638 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" event={"ID":"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae","Type":"ContainerStarted","Data":"f971c1e3133e92c2ffa3e7492bcc55a78a4ef9b84e00d1360fb6ef6dbb495a26"} Dec 06 03:30:02 crc kubenswrapper[4801]: I1206 03:30:02.309318 4801 generic.go:334] "Generic (PLEG): container finished" podID="6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" containerID="cbeea869cfbb8ef3307d96b25a5bd1a5e45865480bb47fbb13ebfa5fdfcbb5ff" exitCode=0 Dec 06 03:30:02 crc kubenswrapper[4801]: I1206 03:30:02.309397 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" event={"ID":"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae","Type":"ContainerDied","Data":"cbeea869cfbb8ef3307d96b25a5bd1a5e45865480bb47fbb13ebfa5fdfcbb5ff"} Dec 06 03:30:02 crc kubenswrapper[4801]: I1206 03:30:02.312671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerStarted","Data":"3f742b499dc6803d0792d2f3c7987d723d4978f88dc2c7de83d252f47ecae10b"} Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.323886 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerStarted","Data":"9b8e069250485dc914d17ec9725456da8517caf13b40b803b3aaea4b316a16e6"} Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.702557 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.786083 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2fqs\" (UniqueName: \"kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs\") pod \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.786296 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume\") pod \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.786366 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume\") pod \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\" (UID: \"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae\") " Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.787842 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" (UID: "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.794052 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs" (OuterVolumeSpecName: "kube-api-access-h2fqs") pod "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" (UID: "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae"). 
InnerVolumeSpecName "kube-api-access-h2fqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.794798 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" (UID: "6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.888456 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2fqs\" (UniqueName: \"kubernetes.io/projected/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-kube-api-access-h2fqs\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.888797 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:03 crc kubenswrapper[4801]: I1206 03:30:03.888862 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:04 crc kubenswrapper[4801]: I1206 03:30:04.337281 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" event={"ID":"6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae","Type":"ContainerDied","Data":"f971c1e3133e92c2ffa3e7492bcc55a78a4ef9b84e00d1360fb6ef6dbb495a26"} Dec 06 03:30:04 crc kubenswrapper[4801]: I1206 03:30:04.337891 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f971c1e3133e92c2ffa3e7492bcc55a78a4ef9b84e00d1360fb6ef6dbb495a26" Dec 06 03:30:04 crc kubenswrapper[4801]: I1206 03:30:04.337348 4801 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn" Dec 06 03:30:05 crc kubenswrapper[4801]: I1206 03:30:05.352113 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerStarted","Data":"d821f520fb9e298f4f8e85bc704ce1805e6061130d86b73625bb0e21ffcbae5b"} Dec 06 03:30:05 crc kubenswrapper[4801]: I1206 03:30:05.353360 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:30:05 crc kubenswrapper[4801]: I1206 03:30:05.402928 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.088182295 podStartE2EDuration="9.402901349s" podCreationTimestamp="2025-12-06 03:29:56 +0000 UTC" firstStartedPulling="2025-12-06 03:29:56.915686432 +0000 UTC m=+1450.038294014" lastFinishedPulling="2025-12-06 03:30:04.230405496 +0000 UTC m=+1457.353013068" observedRunningTime="2025-12-06 03:30:05.38318546 +0000 UTC m=+1458.505793072" watchObservedRunningTime="2025-12-06 03:30:05.402901349 +0000 UTC m=+1458.525508921" Dec 06 03:30:16 crc kubenswrapper[4801]: I1206 03:30:16.465809 4801 generic.go:334] "Generic (PLEG): container finished" podID="23456664-b3cb-40c4-a0a1-a944eef10179" containerID="0b3cc23b79243a74ecad6499497fb48a0a57df82fb2f6070413c9f8149e8d7e1" exitCode=0 Dec 06 03:30:16 crc kubenswrapper[4801]: I1206 03:30:16.465868 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" event={"ID":"23456664-b3cb-40c4-a0a1-a944eef10179","Type":"ContainerDied","Data":"0b3cc23b79243a74ecad6499497fb48a0a57df82fb2f6070413c9f8149e8d7e1"} Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.789556 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.857071 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts\") pod \"23456664-b3cb-40c4-a0a1-a944eef10179\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.857142 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data\") pod \"23456664-b3cb-40c4-a0a1-a944eef10179\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.857183 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle\") pod \"23456664-b3cb-40c4-a0a1-a944eef10179\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.857215 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6k9m\" (UniqueName: \"kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m\") pod \"23456664-b3cb-40c4-a0a1-a944eef10179\" (UID: \"23456664-b3cb-40c4-a0a1-a944eef10179\") " Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.862740 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m" (OuterVolumeSpecName: "kube-api-access-l6k9m") pod "23456664-b3cb-40c4-a0a1-a944eef10179" (UID: "23456664-b3cb-40c4-a0a1-a944eef10179"). InnerVolumeSpecName "kube-api-access-l6k9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.868059 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts" (OuterVolumeSpecName: "scripts") pod "23456664-b3cb-40c4-a0a1-a944eef10179" (UID: "23456664-b3cb-40c4-a0a1-a944eef10179"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.888564 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data" (OuterVolumeSpecName: "config-data") pod "23456664-b3cb-40c4-a0a1-a944eef10179" (UID: "23456664-b3cb-40c4-a0a1-a944eef10179"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.890525 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23456664-b3cb-40c4-a0a1-a944eef10179" (UID: "23456664-b3cb-40c4-a0a1-a944eef10179"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.958571 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.958603 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6k9m\" (UniqueName: \"kubernetes.io/projected/23456664-b3cb-40c4-a0a1-a944eef10179-kube-api-access-l6k9m\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.958619 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:17 crc kubenswrapper[4801]: I1206 03:30:17.958631 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23456664-b3cb-40c4-a0a1-a944eef10179-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.482383 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" event={"ID":"23456664-b3cb-40c4-a0a1-a944eef10179","Type":"ContainerDied","Data":"30b248710982419777a7b7facf2a069af78a80dc16b62c836b4199d3e5a935b0"} Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.482731 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30b248710982419777a7b7facf2a069af78a80dc16b62c836b4199d3e5a935b0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.482438 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7h8zf" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.588836 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 03:30:18 crc kubenswrapper[4801]: E1206 03:30:18.589178 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" containerName="nova-cell0-conductor-db-sync" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.589194 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" containerName="nova-cell0-conductor-db-sync" Dec 06 03:30:18 crc kubenswrapper[4801]: E1206 03:30:18.589228 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" containerName="collect-profiles" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.589235 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" containerName="collect-profiles" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.589401 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" containerName="nova-cell0-conductor-db-sync" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.589418 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" containerName="collect-profiles" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.589981 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.592422 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.598509 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.600081 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m7qht" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.771501 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.771560 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdzn8\" (UniqueName: \"kubernetes.io/projected/36edc3c9-457a-498f-9938-41f98c8b1491-kube-api-access-pdzn8\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.771952 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.873557 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.873679 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.873698 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdzn8\" (UniqueName: \"kubernetes.io/projected/36edc3c9-457a-498f-9938-41f98c8b1491-kube-api-access-pdzn8\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.878746 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.882029 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36edc3c9-457a-498f-9938-41f98c8b1491-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.892702 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdzn8\" (UniqueName: \"kubernetes.io/projected/36edc3c9-457a-498f-9938-41f98c8b1491-kube-api-access-pdzn8\") pod \"nova-cell0-conductor-0\" 
(UID: \"36edc3c9-457a-498f-9938-41f98c8b1491\") " pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:18 crc kubenswrapper[4801]: I1206 03:30:18.909050 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:19 crc kubenswrapper[4801]: I1206 03:30:19.347171 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 03:30:19 crc kubenswrapper[4801]: I1206 03:30:19.491234 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"36edc3c9-457a-498f-9938-41f98c8b1491","Type":"ContainerStarted","Data":"f8fe1b41c61640da6b4b8f7ee5b36b3dc9b555c0cdb0940252606b6395145570"} Dec 06 03:30:20 crc kubenswrapper[4801]: I1206 03:30:20.502335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"36edc3c9-457a-498f-9938-41f98c8b1491","Type":"ContainerStarted","Data":"5b85a05cdafa78c410db7ac5dd5eab759fc58acf8a2d5ad1e39f2ce72bd42bb2"} Dec 06 03:30:20 crc kubenswrapper[4801]: I1206 03:30:20.502622 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:20 crc kubenswrapper[4801]: I1206 03:30:20.527206 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.527187002 podStartE2EDuration="2.527187002s" podCreationTimestamp="2025-12-06 03:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:20.518297399 +0000 UTC m=+1473.640904991" watchObservedRunningTime="2025-12-06 03:30:20.527187002 +0000 UTC m=+1473.649794574" Dec 06 03:30:26 crc kubenswrapper[4801]: I1206 03:30:26.669711 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 03:30:28 crc kubenswrapper[4801]: I1206 
03:30:28.939298 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.349866 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.350094 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f3419971-0654-47d2-befb-5afb0761011c" containerName="kube-state-metrics" containerID="cri-o://dcfe6030a41d831e174e17880ed423d88393f57a70a8f6e8f34e6a2cbe5ff58b" gracePeriod=30 Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.458159 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-tpqcc"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.459530 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.461953 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.462167 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.478100 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tpqcc"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.598601 4801 generic.go:334] "Generic (PLEG): container finished" podID="f3419971-0654-47d2-befb-5afb0761011c" containerID="dcfe6030a41d831e174e17880ed423d88393f57a70a8f6e8f34e6a2cbe5ff58b" exitCode=2 Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.598668 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f3419971-0654-47d2-befb-5afb0761011c","Type":"ContainerDied","Data":"dcfe6030a41d831e174e17880ed423d88393f57a70a8f6e8f34e6a2cbe5ff58b"} Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.601293 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.601371 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9w7p\" (UniqueName: \"kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.601543 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.601593 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.648397 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.651819 4801 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.654944 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.664146 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715517 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715614 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715665 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715730 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9w7p\" (UniqueName: \"kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.715807 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w49gl\" (UniqueName: \"kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.724241 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.725118 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.727866 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.737220 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.738610 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.742585 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.780530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9w7p\" (UniqueName: \"kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p\") pod \"nova-cell0-cell-mapping-tpqcc\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.789607 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823159 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " 
pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823676 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823829 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qj4\" (UniqueName: \"kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823871 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823908 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w49gl\" (UniqueName: \"kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823957 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.823987 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.825002 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.827185 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.828646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.845402 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.871846 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w49gl\" (UniqueName: \"kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl\") pod \"nova-api-0\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.919033 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:30:29 crc 
kubenswrapper[4801]: I1206 03:30:29.920196 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.927470 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qj4\" (UniqueName: \"kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.927540 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.928239 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.928340 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4z4r\" (UniqueName: \"kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.928449 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.928593 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.931034 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.935151 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.940359 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.978009 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qj4\" (UniqueName: \"kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4\") pod \"nova-scheduler-0\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.982301 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.986633 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.987822 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:29 crc kubenswrapper[4801]: I1206 03:30:29.989142 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.017982 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.035338 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.035516 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4z4r\" (UniqueName: \"kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.035692 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.039123 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.041683 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.045398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.046011 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.072361 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4z4r\" (UniqueName: \"kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r\") pod \"nova-cell1-novncproxy-0\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.075491 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"] Dec 06 03:30:30 crc kubenswrapper[4801]: E1206 03:30:30.075932 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3419971-0654-47d2-befb-5afb0761011c" containerName="kube-state-metrics" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.075949 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3419971-0654-47d2-befb-5afb0761011c" containerName="kube-state-metrics" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.076143 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3419971-0654-47d2-befb-5afb0761011c" containerName="kube-state-metrics" Dec 06 03:30:30 crc kubenswrapper[4801]: 
I1206 03:30:30.077229 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.137356 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.137430 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv9hx\" (UniqueName: \"kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.137546 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.137571 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.142314 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"] Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.246929 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.247131 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4rnl\" (UniqueName: \"kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl\") pod \"f3419971-0654-47d2-befb-5afb0761011c\" (UID: \"f3419971-0654-47d2-befb-5afb0761011c\") " Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.248271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.248336 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv9hx\" (UniqueName: \"kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.248421 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.248525 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s7r7\" (UniqueName: \"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " 
pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.248638 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.249453 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.249481 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.249588 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.249615 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.250518 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.251327 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.254007 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.255317 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.258604 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl" (OuterVolumeSpecName: "kube-api-access-w4rnl") pod "f3419971-0654-47d2-befb-5afb0761011c" (UID: "f3419971-0654-47d2-befb-5afb0761011c"). InnerVolumeSpecName "kube-api-access-w4rnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.268517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv9hx\" (UniqueName: \"kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx\") pod \"nova-metadata-0\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.312485 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.350748 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s7r7\" (UniqueName: \"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.350840 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.350923 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.350964 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.350999 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.351050 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4rnl\" (UniqueName: \"kubernetes.io/projected/f3419971-0654-47d2-befb-5afb0761011c-kube-api-access-w4rnl\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.352530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.353400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.353457 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.353947 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.377281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s7r7\" (UniqueName: \"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7\") pod \"dnsmasq-dns-566b5b7845-k2lqg\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.441170 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.582983 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-tpqcc"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.629717 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f3419971-0654-47d2-befb-5afb0761011c","Type":"ContainerDied","Data":"893e6516bc8141a6f5bcfe836bfb1b7afdc547bcfaaca04bf438629084df93f7"}
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.634307 4801 scope.go:117] "RemoveContainer" containerID="dcfe6030a41d831e174e17880ed423d88393f57a70a8f6e8f34e6a2cbe5ff58b"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.630859 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.659390 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.738203 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v9x44"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.741409 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.746579 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.746912 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.773830 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v9x44"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.802042 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.835463 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.873034 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.873094 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.873193 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jjt\" (UniqueName: \"kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.873233 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.887262 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.888490 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.891027 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.891219 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.912669 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.954443 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976337 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jjt\" (UniqueName: \"kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976561 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976601 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976647 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976686 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4kft\" (UniqueName: \"kubernetes.io/projected/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-api-access-w4kft\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976797 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976831 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.976861 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.982368 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.982670 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.985700 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:30 crc kubenswrapper[4801]: I1206 03:30:30.999349 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jjt\" (UniqueName: \"kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt\") pod \"nova-cell1-conductor-db-sync-v9x44\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.076971 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v9x44"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.078108 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.078451 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.078496 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4kft\" (UniqueName: \"kubernetes.io/projected/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-api-access-w4kft\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.078573 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.082966 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.082988 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.085599 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.097843 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4kft\" (UniqueName: \"kubernetes.io/projected/a9b3e048-d2e7-43ef-bac9-cc9536b8c06d-kube-api-access-w4kft\") pod \"kube-state-metrics-0\" (UID: \"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d\") " pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.181217 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 03:30:31 crc kubenswrapper[4801]: W1206 03:30:31.181859 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a8b52cf_722d_47ee_9942_f28c95eb337d.slice/crio-83e5b57303c7f93f7543dab0ad1a5ed685f6971b7df5ed64e89f8e89d1848cbf WatchSource:0}: Error finding container 83e5b57303c7f93f7543dab0ad1a5ed685f6971b7df5ed64e89f8e89d1848cbf: Status 404 returned error can't find the container with id 83e5b57303c7f93f7543dab0ad1a5ed685f6971b7df5ed64e89f8e89d1848cbf
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.231628 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.232414 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3419971-0654-47d2-befb-5afb0761011c" path="/var/lib/kubelet/pods/f3419971-0654-47d2-befb-5afb0761011c/volumes"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.265892 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"]
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.270538 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 03:30:31 crc kubenswrapper[4801]: W1206 03:30:31.283011 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf001cd4d_b7de_410b_af82_1e38fe590a21.slice/crio-5b4837478032d6f228213492fc29c716efb6e8cb27c990cd1e4941a4c5cb822e WatchSource:0}: Error finding container 5b4837478032d6f228213492fc29c716efb6e8cb27c990cd1e4941a4c5cb822e: Status 404 returned error can't find the container with id 5b4837478032d6f228213492fc29c716efb6e8cb27c990cd1e4941a4c5cb822e
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.416543 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.417172 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-central-agent" containerID="cri-o://e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4" gracePeriod=30
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.417897 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="proxy-httpd" containerID="cri-o://d821f520fb9e298f4f8e85bc704ce1805e6061130d86b73625bb0e21ffcbae5b" gracePeriod=30
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.418519 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-notification-agent" containerID="cri-o://3f742b499dc6803d0792d2f3c7987d723d4978f88dc2c7de83d252f47ecae10b" gracePeriod=30
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.418747 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="sg-core" containerID="cri-o://9b8e069250485dc914d17ec9725456da8517caf13b40b803b3aaea4b316a16e6" gracePeriod=30
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.619348 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v9x44"]
Dec 06 03:30:31 crc kubenswrapper[4801]: W1206 03:30:31.649113 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod064e28c8_c61c_4012_8e99_c5996a34ff9d.slice/crio-84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5 WatchSource:0}: Error finding container 84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5: Status 404 returned error can't find the container with id 84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.656410 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" event={"ID":"f563028e-6f64-4540-9043-f9961c26e81c","Type":"ContainerStarted","Data":"650b8aaa28dba205e14617f0455e8eb3e9676fca4c3838436d616073395d294f"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.657592 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerStarted","Data":"9735f089aaae2c322828b3af84720d835ec4cd1d93a905986d6d70e75b5d3e2a"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.660569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tpqcc" event={"ID":"bfb686be-6bac-49fa-a164-543b9c1d7952","Type":"ContainerStarted","Data":"8d1503562c3bdec19876a33a3799370f551cf64b1bc92a9141c81eb24797df24"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.660615 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tpqcc" event={"ID":"bfb686be-6bac-49fa-a164-543b9c1d7952","Type":"ContainerStarted","Data":"900ea18652c78c774984742fef0a468a95cce92f5f42b76609ff0366f12049e7"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.663803 4801 generic.go:334] "Generic (PLEG): container finished" podID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerID="d821f520fb9e298f4f8e85bc704ce1805e6061130d86b73625bb0e21ffcbae5b" exitCode=0
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.663828 4801 generic.go:334] "Generic (PLEG): container finished" podID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerID="9b8e069250485dc914d17ec9725456da8517caf13b40b803b3aaea4b316a16e6" exitCode=2
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.663865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerDied","Data":"d821f520fb9e298f4f8e85bc704ce1805e6061130d86b73625bb0e21ffcbae5b"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.663890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerDied","Data":"9b8e069250485dc914d17ec9725456da8517caf13b40b803b3aaea4b316a16e6"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.665677 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerStarted","Data":"83e5b57303c7f93f7543dab0ad1a5ed685f6971b7df5ed64e89f8e89d1848cbf"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.666705 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f001cd4d-b7de-410b-af82-1e38fe590a21","Type":"ContainerStarted","Data":"5b4837478032d6f228213492fc29c716efb6e8cb27c990cd1e4941a4c5cb822e"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.669396 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4314805-d80b-4ba2-8033-2c49ae745009","Type":"ContainerStarted","Data":"bf102432a2b31321cc92ea63d40cb506e30a1590d68f2ee1c9f137852d56f514"}
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.681617 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-tpqcc" podStartSLOduration=2.6815820649999997 podStartE2EDuration="2.681582065s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:31.678221683 +0000 UTC m=+1484.800829265" watchObservedRunningTime="2025-12-06 03:30:31.681582065 +0000 UTC m=+1484.804189637"
Dec 06 03:30:31 crc kubenswrapper[4801]: I1206 03:30:31.836370 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 06 03:30:31 crc kubenswrapper[4801]: W1206 03:30:31.861158 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b3e048_d2e7_43ef_bac9_cc9536b8c06d.slice/crio-4abc7522ea43827a70d379373d5b2d1f2e905b83bafeb6adec971c46e9973154 WatchSource:0}: Error finding container 4abc7522ea43827a70d379373d5b2d1f2e905b83bafeb6adec971c46e9973154: Status 404 returned error can't find the container with id 4abc7522ea43827a70d379373d5b2d1f2e905b83bafeb6adec971c46e9973154
Dec 06 03:30:31 crc kubenswrapper[4801]: E1206 03:30:31.880579 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf563028e_6f64_4540_9043_f9961c26e81c.slice/crio-5ed7d2ad75d030f874863a39b9d8082d6c286956c6fc5cc03412b370ea1df777.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2dfd6b_63eb_4622_bee5_5f9f77be2d25.slice/crio-e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.692624 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v9x44" event={"ID":"064e28c8-c61c-4012-8e99-c5996a34ff9d","Type":"ContainerStarted","Data":"5d4dbeae13a04a600dd8399caf51523306daeeab0298ecadb5af907c42c87ff8"}
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.693228 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v9x44" event={"ID":"064e28c8-c61c-4012-8e99-c5996a34ff9d","Type":"ContainerStarted","Data":"84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5"}
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.698271 4801 generic.go:334] "Generic (PLEG): container finished" podID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerID="e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4" exitCode=0
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.698341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerDied","Data":"e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4"}
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.700638 4801 generic.go:334] "Generic (PLEG): container finished" podID="f563028e-6f64-4540-9043-f9961c26e81c" containerID="5ed7d2ad75d030f874863a39b9d8082d6c286956c6fc5cc03412b370ea1df777" exitCode=0
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.700709 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" event={"ID":"f563028e-6f64-4540-9043-f9961c26e81c","Type":"ContainerDied","Data":"5ed7d2ad75d030f874863a39b9d8082d6c286956c6fc5cc03412b370ea1df777"}
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.711134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d","Type":"ContainerStarted","Data":"4abc7522ea43827a70d379373d5b2d1f2e905b83bafeb6adec971c46e9973154"}
Dec 06 03:30:32 crc kubenswrapper[4801]: I1206 03:30:32.728633 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-v9x44" podStartSLOduration=2.728607135 podStartE2EDuration="2.728607135s" podCreationTimestamp="2025-12-06 03:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:32.710108679 +0000 UTC m=+1485.832716271" watchObservedRunningTime="2025-12-06 03:30:32.728607135 +0000 UTC m=+1485.851214707"
Dec 06 03:30:33 crc kubenswrapper[4801]: I1206 03:30:33.615149 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 03:30:33 crc kubenswrapper[4801]: I1206 03:30:33.622048 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 03:30:35 crc kubenswrapper[4801]: I1206 03:30:35.754819 4801 generic.go:334] "Generic (PLEG): container finished" podID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerID="3f742b499dc6803d0792d2f3c7987d723d4978f88dc2c7de83d252f47ecae10b" exitCode=0
Dec 06 03:30:35 crc kubenswrapper[4801]: I1206 03:30:35.755833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerDied","Data":"3f742b499dc6803d0792d2f3c7987d723d4978f88dc2c7de83d252f47ecae10b"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.125147 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257284 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257497 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257570 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257661 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6gs7\" (UniqueName: \"kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.257729 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle\") pod \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\" (UID: \"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25\") "
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.258098 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.258250 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.259145 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.267129 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts" (OuterVolumeSpecName: "scripts") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.273234 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7" (OuterVolumeSpecName: "kube-api-access-x6gs7") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "kube-api-access-x6gs7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.361350 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6gs7\" (UniqueName: \"kubernetes.io/projected/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-kube-api-access-x6gs7\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.361381 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.361390 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.444508 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.463969 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.521296 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.530832 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data" (OuterVolumeSpecName: "config-data") pod "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" (UID: "8b2dfd6b-63eb-4622-bee5-5f9f77be2d25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.566185 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.566241 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.772425 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4314805-d80b-4ba2-8033-2c49ae745009","Type":"ContainerStarted","Data":"d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.775045 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" event={"ID":"f563028e-6f64-4540-9043-f9961c26e81c","Type":"ContainerStarted","Data":"c684c0bb6dc318947d17ce22631ebd964e31c23c7b053bc867e95f00d742445c"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.775124 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg"
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.777181 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a9b3e048-d2e7-43ef-bac9-cc9536b8c06d","Type":"ContainerStarted","Data":"3b129973426db596afc18796e2594462b9abf12fb945256481b1825c4a6eea63"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.777304 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.779789 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerStarted","Data":"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.779824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerStarted","Data":"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.782770 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b2dfd6b-63eb-4622-bee5-5f9f77be2d25","Type":"ContainerDied","Data":"3d15a363b82f27c480840a921ec5741415a3f4166c0ce0c226567e211206fe0d"}
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.783072 4801 scope.go:117] "RemoveContainer" containerID="d821f520fb9e298f4f8e85bc704ce1805e6061130d86b73625bb0e21ffcbae5b"
Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.783248 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.787555 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerStarted","Data":"f41dd903e2c2fb6869c870ddc1c6160a8dff90a46ab3375073ccbc1d37cdfa72"} Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.787605 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerStarted","Data":"ea708b073a20b942c85ffca1ca28cf29c1b4e27f384fa4a16e0ff87dbbcdb399"} Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.787781 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-log" containerID="cri-o://ea708b073a20b942c85ffca1ca28cf29c1b4e27f384fa4a16e0ff87dbbcdb399" gracePeriod=30 Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.787876 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-metadata" containerID="cri-o://f41dd903e2c2fb6869c870ddc1c6160a8dff90a46ab3375073ccbc1d37cdfa72" gracePeriod=30 Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.803560 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f001cd4d-b7de-410b-af82-1e38fe590a21","Type":"ContainerStarted","Data":"5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58"} Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.806594 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f001cd4d-b7de-410b-af82-1e38fe590a21" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58" gracePeriod=30 Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.807434 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.775174264 podStartE2EDuration="7.807415154s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="2025-12-06 03:30:30.961390775 +0000 UTC m=+1484.083998337" lastFinishedPulling="2025-12-06 03:30:35.993631655 +0000 UTC m=+1489.116239227" observedRunningTime="2025-12-06 03:30:36.799416096 +0000 UTC m=+1489.922023668" watchObservedRunningTime="2025-12-06 03:30:36.807415154 +0000 UTC m=+1489.930022726" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.835579 4801 scope.go:117] "RemoveContainer" containerID="9b8e069250485dc914d17ec9725456da8517caf13b40b803b3aaea4b316a16e6" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.844249 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" podStartSLOduration=7.844179479 podStartE2EDuration="7.844179479s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:36.828283495 +0000 UTC m=+1489.950891077" watchObservedRunningTime="2025-12-06 03:30:36.844179479 +0000 UTC m=+1489.966787051" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.867627 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.077644446 podStartE2EDuration="7.8676032s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="2025-12-06 03:30:31.207182408 +0000 UTC m=+1484.329789980" lastFinishedPulling="2025-12-06 03:30:35.997141162 +0000 UTC m=+1489.119748734" observedRunningTime="2025-12-06 03:30:36.854875912 +0000 UTC m=+1489.977483484" 
watchObservedRunningTime="2025-12-06 03:30:36.8676032 +0000 UTC m=+1489.990210792" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.875932 4801 scope.go:117] "RemoveContainer" containerID="3f742b499dc6803d0792d2f3c7987d723d4978f88dc2c7de83d252f47ecae10b" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.886376 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.620109903 podStartE2EDuration="7.886357084s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="2025-12-06 03:30:30.699269345 +0000 UTC m=+1483.821876917" lastFinishedPulling="2025-12-06 03:30:35.965516526 +0000 UTC m=+1489.088124098" observedRunningTime="2025-12-06 03:30:36.885544361 +0000 UTC m=+1490.008151943" watchObservedRunningTime="2025-12-06 03:30:36.886357084 +0000 UTC m=+1490.008964656" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.915080 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.825779591 podStartE2EDuration="6.915057208s" podCreationTimestamp="2025-12-06 03:30:30 +0000 UTC" firstStartedPulling="2025-12-06 03:30:31.904352698 +0000 UTC m=+1485.026960270" lastFinishedPulling="2025-12-06 03:30:35.993630325 +0000 UTC m=+1489.116237887" observedRunningTime="2025-12-06 03:30:36.907200573 +0000 UTC m=+1490.029808165" watchObservedRunningTime="2025-12-06 03:30:36.915057208 +0000 UTC m=+1490.037664770" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.922470 4801 scope.go:117] "RemoveContainer" containerID="e15eba1aac2d9afe4e29ad61c9685f25154050cdbc88e99d9c94676ba5156ac4" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.941739 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.236658926 podStartE2EDuration="7.941704467s" podCreationTimestamp="2025-12-06 03:30:29 +0000 UTC" firstStartedPulling="2025-12-06 
03:30:31.290942909 +0000 UTC m=+1484.413550471" lastFinishedPulling="2025-12-06 03:30:35.99598844 +0000 UTC m=+1489.118596012" observedRunningTime="2025-12-06 03:30:36.932980559 +0000 UTC m=+1490.055588131" watchObservedRunningTime="2025-12-06 03:30:36.941704467 +0000 UTC m=+1490.064312039" Dec 06 03:30:36 crc kubenswrapper[4801]: I1206 03:30:36.994130 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.005524 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.015947 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:30:37 crc kubenswrapper[4801]: E1206 03:30:37.016486 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-notification-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.016554 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-notification-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: E1206 03:30:37.016616 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="sg-core" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.016667 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="sg-core" Dec 06 03:30:37 crc kubenswrapper[4801]: E1206 03:30:37.016735 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-central-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.016817 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-central-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: 
E1206 03:30:37.016896 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="proxy-httpd" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.016951 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="proxy-httpd" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.017171 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-central-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.017234 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="sg-core" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.017294 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="proxy-httpd" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.017349 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" containerName="ceilometer-notification-agent" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.019083 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.021370 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.026098 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.026240 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.026348 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181230 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt8zg\" (UniqueName: \"kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181253 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181396 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181437 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181467 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181566 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.181611 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.223825 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2dfd6b-63eb-4622-bee5-5f9f77be2d25" path="/var/lib/kubelet/pods/8b2dfd6b-63eb-4622-bee5-5f9f77be2d25/volumes" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 
03:30:37.283372 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283447 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283536 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283565 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283600 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283627 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt8zg\" (UniqueName: \"kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg\") 
pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283644 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.283663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.284349 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.284344 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.289171 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.289960 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.290318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.290474 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.305543 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.323909 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt8zg\" (UniqueName: \"kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg\") pod \"ceilometer-0\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.344386 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.817322 4801 generic.go:334] "Generic (PLEG): container finished" podID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerID="ea708b073a20b942c85ffca1ca28cf29c1b4e27f384fa4a16e0ff87dbbcdb399" exitCode=143 Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.817517 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerDied","Data":"ea708b073a20b942c85ffca1ca28cf29c1b4e27f384fa4a16e0ff87dbbcdb399"} Dec 06 03:30:37 crc kubenswrapper[4801]: I1206 03:30:37.819195 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:30:38 crc kubenswrapper[4801]: I1206 03:30:38.829987 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerStarted","Data":"e01fbe63de003ac5929a870ea1272614f2df929df74a1d10851c3463e550f579"} Dec 06 03:30:38 crc kubenswrapper[4801]: I1206 03:30:38.830250 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerStarted","Data":"6ab8a71d24c52a8438743acd21e0d528a6bc10cc974e94959b943ffff447fbd6"} Dec 06 03:30:39 crc kubenswrapper[4801]: I1206 03:30:39.843062 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerStarted","Data":"2ff70614331fa589ba3a7ca547f71af86ca8d306220838332527024261bef8cd"} Dec 06 03:30:39 crc kubenswrapper[4801]: I1206 03:30:39.988689 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:30:39 crc kubenswrapper[4801]: I1206 03:30:39.988780 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 
03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.248514 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.248675 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.251121 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.280500 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.312791 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.315213 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.859452 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerStarted","Data":"8f1835da72045a39f1a0d0b8b3239379b36aa3b46233bdaea047ae9baf3d9ace"} Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.861901 4801 generic.go:334] "Generic (PLEG): container finished" podID="bfb686be-6bac-49fa-a164-543b9c1d7952" containerID="8d1503562c3bdec19876a33a3799370f551cf64b1bc92a9141c81eb24797df24" exitCode=0 Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.862015 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tpqcc" event={"ID":"bfb686be-6bac-49fa-a164-543b9c1d7952","Type":"ContainerDied","Data":"8d1503562c3bdec19876a33a3799370f551cf64b1bc92a9141c81eb24797df24"} Dec 06 03:30:40 crc kubenswrapper[4801]: I1206 03:30:40.903152 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-scheduler-0" Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.071061 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.071155 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.168:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.240528 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.872316 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerStarted","Data":"3243113ab86709f8248c7efa7fce2630a3305dd299d3c3212df806c8e6aa3d76"} Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.872644 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:30:41 crc kubenswrapper[4801]: I1206 03:30:41.902742 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7281546 podStartE2EDuration="5.902716928s" podCreationTimestamp="2025-12-06 03:30:36 +0000 UTC" firstStartedPulling="2025-12-06 03:30:37.831391214 +0000 UTC m=+1490.953998786" lastFinishedPulling="2025-12-06 03:30:41.005953532 +0000 UTC m=+1494.128561114" observedRunningTime="2025-12-06 03:30:41.897409804 +0000 UTC m=+1495.020017396" watchObservedRunningTime="2025-12-06 03:30:41.902716928 +0000 UTC 
m=+1495.025324500" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.284918 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.393235 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle\") pod \"bfb686be-6bac-49fa-a164-543b9c1d7952\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.393624 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9w7p\" (UniqueName: \"kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p\") pod \"bfb686be-6bac-49fa-a164-543b9c1d7952\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.393718 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data\") pod \"bfb686be-6bac-49fa-a164-543b9c1d7952\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.393784 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts\") pod \"bfb686be-6bac-49fa-a164-543b9c1d7952\" (UID: \"bfb686be-6bac-49fa-a164-543b9c1d7952\") " Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.400994 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts" (OuterVolumeSpecName: "scripts") pod "bfb686be-6bac-49fa-a164-543b9c1d7952" (UID: "bfb686be-6bac-49fa-a164-543b9c1d7952"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.411733 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p" (OuterVolumeSpecName: "kube-api-access-q9w7p") pod "bfb686be-6bac-49fa-a164-543b9c1d7952" (UID: "bfb686be-6bac-49fa-a164-543b9c1d7952"). InnerVolumeSpecName "kube-api-access-q9w7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.427616 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfb686be-6bac-49fa-a164-543b9c1d7952" (UID: "bfb686be-6bac-49fa-a164-543b9c1d7952"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.429620 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data" (OuterVolumeSpecName: "config-data") pod "bfb686be-6bac-49fa-a164-543b9c1d7952" (UID: "bfb686be-6bac-49fa-a164-543b9c1d7952"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.496278 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.496316 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.496325 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb686be-6bac-49fa-a164-543b9c1d7952-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.496338 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9w7p\" (UniqueName: \"kubernetes.io/projected/bfb686be-6bac-49fa-a164-543b9c1d7952-kube-api-access-q9w7p\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.909296 4801 generic.go:334] "Generic (PLEG): container finished" podID="064e28c8-c61c-4012-8e99-c5996a34ff9d" containerID="5d4dbeae13a04a600dd8399caf51523306daeeab0298ecadb5af907c42c87ff8" exitCode=0 Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.909725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v9x44" event={"ID":"064e28c8-c61c-4012-8e99-c5996a34ff9d","Type":"ContainerDied","Data":"5d4dbeae13a04a600dd8399caf51523306daeeab0298ecadb5af907c42c87ff8"} Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.927971 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-tpqcc" 
event={"ID":"bfb686be-6bac-49fa-a164-543b9c1d7952","Type":"ContainerDied","Data":"900ea18652c78c774984742fef0a468a95cce92f5f42b76609ff0366f12049e7"} Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.928023 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900ea18652c78c774984742fef0a468a95cce92f5f42b76609ff0366f12049e7" Dec 06 03:30:42 crc kubenswrapper[4801]: I1206 03:30:42.928080 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-tpqcc" Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.074434 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.074796 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-api" containerID="cri-o://23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894" gracePeriod=30 Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.074708 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-log" containerID="cri-o://87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22" gracePeriod=30 Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.084562 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.945711 4801 generic.go:334] "Generic (PLEG): container finished" podID="e5baabf2-3764-4291-91f5-088aa7aae099" containerID="87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22" exitCode=143 Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.946044 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerDied","Data":"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22"} Dec 06 03:30:43 crc kubenswrapper[4801]: I1206 03:30:43.946205 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" containerName="nova-scheduler-scheduler" containerID="cri-o://d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" gracePeriod=30 Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.315351 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v9x44" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.432763 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle\") pod \"064e28c8-c61c-4012-8e99-c5996a34ff9d\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.432838 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6jjt\" (UniqueName: \"kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt\") pod \"064e28c8-c61c-4012-8e99-c5996a34ff9d\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.432920 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts\") pod \"064e28c8-c61c-4012-8e99-c5996a34ff9d\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.433053 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data\") pod \"064e28c8-c61c-4012-8e99-c5996a34ff9d\" (UID: \"064e28c8-c61c-4012-8e99-c5996a34ff9d\") " Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.438926 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts" (OuterVolumeSpecName: "scripts") pod "064e28c8-c61c-4012-8e99-c5996a34ff9d" (UID: "064e28c8-c61c-4012-8e99-c5996a34ff9d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.453534 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt" (OuterVolumeSpecName: "kube-api-access-f6jjt") pod "064e28c8-c61c-4012-8e99-c5996a34ff9d" (UID: "064e28c8-c61c-4012-8e99-c5996a34ff9d"). InnerVolumeSpecName "kube-api-access-f6jjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.460452 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data" (OuterVolumeSpecName: "config-data") pod "064e28c8-c61c-4012-8e99-c5996a34ff9d" (UID: "064e28c8-c61c-4012-8e99-c5996a34ff9d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.460958 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "064e28c8-c61c-4012-8e99-c5996a34ff9d" (UID: "064e28c8-c61c-4012-8e99-c5996a34ff9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.534616 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.534919 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.534932 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/064e28c8-c61c-4012-8e99-c5996a34ff9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.534946 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6jjt\" (UniqueName: \"kubernetes.io/projected/064e28c8-c61c-4012-8e99-c5996a34ff9d-kube-api-access-f6jjt\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.958459 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-v9x44" event={"ID":"064e28c8-c61c-4012-8e99-c5996a34ff9d","Type":"ContainerDied","Data":"84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5"} Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.958519 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84b282b800559b3a17f41daa6a94875a9ccead41048ab3b80058c5468c7a5fa5" Dec 06 03:30:44 crc kubenswrapper[4801]: I1206 03:30:44.958568 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-v9x44" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.023395 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.023807 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb686be-6bac-49fa-a164-543b9c1d7952" containerName="nova-manage" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.023823 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb686be-6bac-49fa-a164-543b9c1d7952" containerName="nova-manage" Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.023860 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064e28c8-c61c-4012-8e99-c5996a34ff9d" containerName="nova-cell1-conductor-db-sync" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.023867 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="064e28c8-c61c-4012-8e99-c5996a34ff9d" containerName="nova-cell1-conductor-db-sync" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.024019 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="064e28c8-c61c-4012-8e99-c5996a34ff9d" containerName="nova-cell1-conductor-db-sync" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.024041 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb686be-6bac-49fa-a164-543b9c1d7952" containerName="nova-manage" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.024608 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.027485 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.031045 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.145047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.145112 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cgt\" (UniqueName: \"kubernetes.io/projected/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-kube-api-access-47cgt\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.145176 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.247903 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc 
kubenswrapper[4801]: I1206 03:30:45.247979 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cgt\" (UniqueName: \"kubernetes.io/projected/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-kube-api-access-47cgt\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.248013 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.249515 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.251118 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.253431 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.254828 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.256189 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:30:45 crc kubenswrapper[4801]: E1206 03:30:45.256249 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" containerName="nova-scheduler-scheduler" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.268219 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cgt\" (UniqueName: \"kubernetes.io/projected/f9207b6a-e3f4-4613-97eb-7c4022ca8fa0-kube-api-access-47cgt\") pod \"nova-cell1-conductor-0\" (UID: \"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0\") " pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.339937 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.453834 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.530592 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.530852 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="dnsmasq-dns" containerID="cri-o://8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843" gracePeriod=10 Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.844926 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.966372 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.976342 4801 generic.go:334] "Generic (PLEG): container finished" podID="1279875a-a29e-48df-9631-e248326cecfa" containerID="8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843" exitCode=0 Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.976402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerDied","Data":"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843"} Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.976429 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" event={"ID":"1279875a-a29e-48df-9631-e248326cecfa","Type":"ContainerDied","Data":"28c0c4f72e140aa7dc5b502a0265ba05bd0b25642c63928a73b1b1cdfa78ec17"} Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.976446 4801 scope.go:117] "RemoveContainer" containerID="8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.976596 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-4q9fn" Dec 06 03:30:45 crc kubenswrapper[4801]: I1206 03:30:45.978940 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0","Type":"ContainerStarted","Data":"51cd377794d7c922a535c1bbca0b8c6ca4d77e9b1861a2fe31cd08a4775dfcba"} Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.020041 4801 scope.go:117] "RemoveContainer" containerID="de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.044345 4801 scope.go:117] "RemoveContainer" containerID="8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843" Dec 06 03:30:46 crc kubenswrapper[4801]: E1206 03:30:46.044844 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843\": container with ID starting with 8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843 not found: ID does not exist" containerID="8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.044875 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843"} err="failed to get container status \"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843\": rpc error: code = NotFound desc = could not find container \"8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843\": container with ID starting with 8fe66cce9d8db3967a4050c45c15d3d09347b4c790f6ee61a81d957275bc6843 not found: ID does not exist" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.044911 4801 scope.go:117] "RemoveContainer" containerID="de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6" Dec 06 03:30:46 crc 
kubenswrapper[4801]: E1206 03:30:46.045327 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6\": container with ID starting with de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6 not found: ID does not exist" containerID="de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.045377 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6"} err="failed to get container status \"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6\": rpc error: code = NotFound desc = could not find container \"de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6\": container with ID starting with de6f89bcf144cc814719c2a224973956e7b9ce72d3f0e065f841b24f4f86ecb6 not found: ID does not exist" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.064704 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb\") pod \"1279875a-a29e-48df-9631-e248326cecfa\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.064800 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb\") pod \"1279875a-a29e-48df-9631-e248326cecfa\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.064868 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mzxl\" (UniqueName: 
\"kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl\") pod \"1279875a-a29e-48df-9631-e248326cecfa\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.064902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config\") pod \"1279875a-a29e-48df-9631-e248326cecfa\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.064928 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc\") pod \"1279875a-a29e-48df-9631-e248326cecfa\" (UID: \"1279875a-a29e-48df-9631-e248326cecfa\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.069075 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl" (OuterVolumeSpecName: "kube-api-access-4mzxl") pod "1279875a-a29e-48df-9631-e248326cecfa" (UID: "1279875a-a29e-48df-9631-e248326cecfa"). InnerVolumeSpecName "kube-api-access-4mzxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.114646 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1279875a-a29e-48df-9631-e248326cecfa" (UID: "1279875a-a29e-48df-9631-e248326cecfa"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.118523 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1279875a-a29e-48df-9631-e248326cecfa" (UID: "1279875a-a29e-48df-9631-e248326cecfa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.121608 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config" (OuterVolumeSpecName: "config") pod "1279875a-a29e-48df-9631-e248326cecfa" (UID: "1279875a-a29e-48df-9631-e248326cecfa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.124935 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1279875a-a29e-48df-9631-e248326cecfa" (UID: "1279875a-a29e-48df-9631-e248326cecfa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.166813 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mzxl\" (UniqueName: \"kubernetes.io/projected/1279875a-a29e-48df-9631-e248326cecfa-kube-api-access-4mzxl\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.166847 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.166857 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.166868 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.166878 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1279875a-a29e-48df-9631-e248326cecfa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.400929 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.414958 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-4q9fn"] Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.653005 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.780108 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data\") pod \"e5baabf2-3764-4291-91f5-088aa7aae099\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.780256 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs\") pod \"e5baabf2-3764-4291-91f5-088aa7aae099\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.780319 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w49gl\" (UniqueName: \"kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl\") pod \"e5baabf2-3764-4291-91f5-088aa7aae099\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.780392 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle\") pod \"e5baabf2-3764-4291-91f5-088aa7aae099\" (UID: \"e5baabf2-3764-4291-91f5-088aa7aae099\") " Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.781705 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs" (OuterVolumeSpecName: "logs") pod "e5baabf2-3764-4291-91f5-088aa7aae099" (UID: "e5baabf2-3764-4291-91f5-088aa7aae099"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.785594 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl" (OuterVolumeSpecName: "kube-api-access-w49gl") pod "e5baabf2-3764-4291-91f5-088aa7aae099" (UID: "e5baabf2-3764-4291-91f5-088aa7aae099"). InnerVolumeSpecName "kube-api-access-w49gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.811731 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5baabf2-3764-4291-91f5-088aa7aae099" (UID: "e5baabf2-3764-4291-91f5-088aa7aae099"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.814165 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data" (OuterVolumeSpecName: "config-data") pod "e5baabf2-3764-4291-91f5-088aa7aae099" (UID: "e5baabf2-3764-4291-91f5-088aa7aae099"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.882350 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.882394 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5baabf2-3764-4291-91f5-088aa7aae099-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.882404 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w49gl\" (UniqueName: \"kubernetes.io/projected/e5baabf2-3764-4291-91f5-088aa7aae099-kube-api-access-w49gl\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.882415 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5baabf2-3764-4291-91f5-088aa7aae099-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.993140 4801 generic.go:334] "Generic (PLEG): container finished" podID="e5baabf2-3764-4291-91f5-088aa7aae099" containerID="23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894" exitCode=0 Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.993533 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:46 crc kubenswrapper[4801]: I1206 03:30:46.999874 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerDied","Data":"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894"} Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:46.999930 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e5baabf2-3764-4291-91f5-088aa7aae099","Type":"ContainerDied","Data":"9735f089aaae2c322828b3af84720d835ec4cd1d93a905986d6d70e75b5d3e2a"} Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:46.999947 4801 scope.go:117] "RemoveContainer" containerID="23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.004381 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f9207b6a-e3f4-4613-97eb-7c4022ca8fa0","Type":"ContainerStarted","Data":"e88791ce93d6c07c5e3618213c8a4f9795699e254f3813129c1c8a31867c96eb"} Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.004533 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.035177 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.035148624 podStartE2EDuration="2.035148624s" podCreationTimestamp="2025-12-06 03:30:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:47.027405742 +0000 UTC m=+1500.150013314" watchObservedRunningTime="2025-12-06 03:30:47.035148624 +0000 UTC m=+1500.157756226" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.037254 4801 scope.go:117] "RemoveContainer" 
containerID="87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.056209 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.071048 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.079460 4801 scope.go:117] "RemoveContainer" containerID="23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894" Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.081249 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894\": container with ID starting with 23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894 not found: ID does not exist" containerID="23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.081286 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894"} err="failed to get container status \"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894\": rpc error: code = NotFound desc = could not find container \"23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894\": container with ID starting with 23f6bfe81a1d0fda1c8aefdc475612980eaef2489c32bc000f4a20859602c894 not found: ID does not exist" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.081312 4801 scope.go:117] "RemoveContainer" containerID="87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22" Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.081577 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22\": container with ID starting with 87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22 not found: ID does not exist" containerID="87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.081607 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22"} err="failed to get container status \"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22\": rpc error: code = NotFound desc = could not find container \"87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22\": container with ID starting with 87910f81210294277ddc656af506c6dee85d126671f3e1d2d87554225b69ac22 not found: ID does not exist" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.087623 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.088032 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="dnsmasq-dns" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088053 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="dnsmasq-dns" Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.088067 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-log" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088076 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-log" Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.088098 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="init" Dec 06 
03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088105 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="init" Dec 06 03:30:47 crc kubenswrapper[4801]: E1206 03:30:47.088132 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-api" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088139 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-api" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088341 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1279875a-a29e-48df-9631-e248326cecfa" containerName="dnsmasq-dns" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088363 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-api" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.088377 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" containerName="nova-api-log" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.089551 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.101091 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.113326 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.187523 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.187586 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.187636 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.187722 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtk6\" (UniqueName: \"kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.222705 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1279875a-a29e-48df-9631-e248326cecfa" path="/var/lib/kubelet/pods/1279875a-a29e-48df-9631-e248326cecfa/volumes" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.223399 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5baabf2-3764-4291-91f5-088aa7aae099" path="/var/lib/kubelet/pods/e5baabf2-3764-4291-91f5-088aa7aae099/volumes" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.291018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.291184 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.291266 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtk6\" (UniqueName: \"kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.291469 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.292548 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.295622 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.299459 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.312225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.322160 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtk6\" (UniqueName: \"kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6\") pod \"nova-api-0\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.413102 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:30:47 crc kubenswrapper[4801]: I1206 03:30:47.822595 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:30:48 crc kubenswrapper[4801]: I1206 03:30:48.016147 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerStarted","Data":"bb890ef25420824dbf349a5b63da4d1e48ab390f25248462031f084dbdd3e4dd"} Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.024697 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerStarted","Data":"92135db5e8647758050ab1e1eccaaefae7a53b6eefef4a18b49c06928433d1ab"} Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.025046 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerStarted","Data":"dc021707c138c1f320430680148bf61e8cf56acd639ed8aa140ddcd897afc30e"} Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.028277 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4314805-d80b-4ba2-8033-2c49ae745009","Type":"ContainerDied","Data":"d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953"} Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.028438 4801 generic.go:334] "Generic (PLEG): container finished" podID="e4314805-d80b-4ba2-8033-2c49ae745009" containerID="d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" exitCode=0 Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.046978 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.046962326 podStartE2EDuration="2.046962326s" podCreationTimestamp="2025-12-06 03:30:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:49.042636347 +0000 UTC m=+1502.165243919" watchObservedRunningTime="2025-12-06 03:30:49.046962326 +0000 UTC m=+1502.169569898" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.169981 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.331910 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle\") pod \"e4314805-d80b-4ba2-8033-2c49ae745009\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.332172 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data\") pod \"e4314805-d80b-4ba2-8033-2c49ae745009\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.332496 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55qj4\" (UniqueName: \"kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4\") pod \"e4314805-d80b-4ba2-8033-2c49ae745009\" (UID: \"e4314805-d80b-4ba2-8033-2c49ae745009\") " Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.339284 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4" (OuterVolumeSpecName: "kube-api-access-55qj4") pod "e4314805-d80b-4ba2-8033-2c49ae745009" (UID: "e4314805-d80b-4ba2-8033-2c49ae745009"). InnerVolumeSpecName "kube-api-access-55qj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.360969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data" (OuterVolumeSpecName: "config-data") pod "e4314805-d80b-4ba2-8033-2c49ae745009" (UID: "e4314805-d80b-4ba2-8033-2c49ae745009"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.377913 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4314805-d80b-4ba2-8033-2c49ae745009" (UID: "e4314805-d80b-4ba2-8033-2c49ae745009"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.435528 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55qj4\" (UniqueName: \"kubernetes.io/projected/e4314805-d80b-4ba2-8033-2c49ae745009-kube-api-access-55qj4\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.436034 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:49 crc kubenswrapper[4801]: I1206 03:30:49.436051 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4314805-d80b-4ba2-8033-2c49ae745009-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.038269 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.038270 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4314805-d80b-4ba2-8033-2c49ae745009","Type":"ContainerDied","Data":"bf102432a2b31321cc92ea63d40cb506e30a1590d68f2ee1c9f137852d56f514"} Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.038334 4801 scope.go:117] "RemoveContainer" containerID="d790edae2ffe374d3619db30be034dfb9d1ed6e2e78cb47feda39def89433953" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.080515 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.089532 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.105360 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:50 crc kubenswrapper[4801]: E1206 03:30:50.105850 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" containerName="nova-scheduler-scheduler" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.105897 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" containerName="nova-scheduler-scheduler" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.106144 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" containerName="nova-scheduler-scheduler" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.106815 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.111293 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.124367 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.249213 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrms\" (UniqueName: \"kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.249324 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.249647 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.351037 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.351211 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.351284 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrms\" (UniqueName: \"kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.356591 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.358427 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.371873 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrms\" (UniqueName: \"kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms\") pod \"nova-scheduler-0\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.432636 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:30:50 crc kubenswrapper[4801]: I1206 03:30:50.890408 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:30:51 crc kubenswrapper[4801]: I1206 03:30:51.047792 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ddc46c-cdb8-400e-8308-dfd2ce38dee4","Type":"ContainerStarted","Data":"7dcd8a73e20c4731cf936a7b77fb23b51fcc32e3dc8289ea6e70684132cb5260"} Dec 06 03:30:51 crc kubenswrapper[4801]: I1206 03:30:51.226963 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4314805-d80b-4ba2-8033-2c49ae745009" path="/var/lib/kubelet/pods/e4314805-d80b-4ba2-8033-2c49ae745009/volumes" Dec 06 03:30:52 crc kubenswrapper[4801]: I1206 03:30:52.057929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ddc46c-cdb8-400e-8308-dfd2ce38dee4","Type":"ContainerStarted","Data":"f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98"} Dec 06 03:30:52 crc kubenswrapper[4801]: I1206 03:30:52.082306 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.082285854 podStartE2EDuration="2.082285854s" podCreationTimestamp="2025-12-06 03:30:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:30:52.072936038 +0000 UTC m=+1505.195543610" watchObservedRunningTime="2025-12-06 03:30:52.082285854 +0000 UTC m=+1505.204893426" Dec 06 03:30:55 crc kubenswrapper[4801]: I1206 03:30:55.370855 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 06 03:30:55 crc kubenswrapper[4801]: I1206 03:30:55.432783 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 03:30:57 crc 
kubenswrapper[4801]: I1206 03:30:57.413453 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:30:57 crc kubenswrapper[4801]: I1206 03:30:57.413706 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:30:58 crc kubenswrapper[4801]: I1206 03:30:58.496022 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:30:58 crc kubenswrapper[4801]: I1206 03:30:58.496563 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.177:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:00 crc kubenswrapper[4801]: I1206 03:31:00.433131 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 03:31:00 crc kubenswrapper[4801]: I1206 03:31:00.459204 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 03:31:01 crc kubenswrapper[4801]: I1206 03:31:01.163664 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 03:31:07 crc kubenswrapper[4801]: E1206 03:31:07.154037 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf001cd4d_b7de_410b_af82_1e38fe590a21.slice/crio-conmon-5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58.scope\": RecentStats: unable to find data in memory cache]" Dec 06 03:31:07 crc 
kubenswrapper[4801]: I1206 03:31:07.186647 4801 generic.go:334] "Generic (PLEG): container finished" podID="f001cd4d-b7de-410b-af82-1e38fe590a21" containerID="5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58" exitCode=137 Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.186710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f001cd4d-b7de-410b-af82-1e38fe590a21","Type":"ContainerDied","Data":"5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58"} Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.188357 4801 generic.go:334] "Generic (PLEG): container finished" podID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerID="f41dd903e2c2fb6869c870ddc1c6160a8dff90a46ab3375073ccbc1d37cdfa72" exitCode=137 Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.188383 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerDied","Data":"f41dd903e2c2fb6869c870ddc1c6160a8dff90a46ab3375073ccbc1d37cdfa72"} Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.353387 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.379092 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.387116 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.424362 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.424813 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.427386 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.433800 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.482014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle\") pod \"0a8b52cf-722d-47ee-9942-f28c95eb337d\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.482089 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv9hx\" (UniqueName: \"kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx\") pod \"0a8b52cf-722d-47ee-9942-f28c95eb337d\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.482131 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data\") pod \"0a8b52cf-722d-47ee-9942-f28c95eb337d\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.482188 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data\") pod \"f001cd4d-b7de-410b-af82-1e38fe590a21\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.482224 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4z4r\" (UniqueName: \"kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r\") pod \"f001cd4d-b7de-410b-af82-1e38fe590a21\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.483052 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle\") pod \"f001cd4d-b7de-410b-af82-1e38fe590a21\" (UID: \"f001cd4d-b7de-410b-af82-1e38fe590a21\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.483163 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs\") pod \"0a8b52cf-722d-47ee-9942-f28c95eb337d\" (UID: \"0a8b52cf-722d-47ee-9942-f28c95eb337d\") " Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.483901 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs" (OuterVolumeSpecName: "logs") pod "0a8b52cf-722d-47ee-9942-f28c95eb337d" (UID: "0a8b52cf-722d-47ee-9942-f28c95eb337d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.488097 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r" (OuterVolumeSpecName: "kube-api-access-d4z4r") pod "f001cd4d-b7de-410b-af82-1e38fe590a21" (UID: "f001cd4d-b7de-410b-af82-1e38fe590a21"). InnerVolumeSpecName "kube-api-access-d4z4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.493141 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx" (OuterVolumeSpecName: "kube-api-access-nv9hx") pod "0a8b52cf-722d-47ee-9942-f28c95eb337d" (UID: "0a8b52cf-722d-47ee-9942-f28c95eb337d"). InnerVolumeSpecName "kube-api-access-nv9hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.515208 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a8b52cf-722d-47ee-9942-f28c95eb337d" (UID: "0a8b52cf-722d-47ee-9942-f28c95eb337d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.516192 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f001cd4d-b7de-410b-af82-1e38fe590a21" (UID: "f001cd4d-b7de-410b-af82-1e38fe590a21"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.516931 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data" (OuterVolumeSpecName: "config-data") pod "0a8b52cf-722d-47ee-9942-f28c95eb337d" (UID: "0a8b52cf-722d-47ee-9942-f28c95eb337d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.517278 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data" (OuterVolumeSpecName: "config-data") pod "f001cd4d-b7de-410b-af82-1e38fe590a21" (UID: "f001cd4d-b7de-410b-af82-1e38fe590a21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586546 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a8b52cf-722d-47ee-9942-f28c95eb337d-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586571 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586584 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv9hx\" (UniqueName: \"kubernetes.io/projected/0a8b52cf-722d-47ee-9942-f28c95eb337d-kube-api-access-nv9hx\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586593 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a8b52cf-722d-47ee-9942-f28c95eb337d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc 
kubenswrapper[4801]: I1206 03:31:07.586603 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586611 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4z4r\" (UniqueName: \"kubernetes.io/projected/f001cd4d-b7de-410b-af82-1e38fe590a21-kube-api-access-d4z4r\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:07 crc kubenswrapper[4801]: I1206 03:31:07.586621 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f001cd4d-b7de-410b-af82-1e38fe590a21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.198296 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a8b52cf-722d-47ee-9942-f28c95eb337d","Type":"ContainerDied","Data":"83e5b57303c7f93f7543dab0ad1a5ed685f6971b7df5ed64e89f8e89d1848cbf"} Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.198345 4801 scope.go:117] "RemoveContainer" containerID="f41dd903e2c2fb6869c870ddc1c6160a8dff90a46ab3375073ccbc1d37cdfa72" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.199644 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.201543 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f001cd4d-b7de-410b-af82-1e38fe590a21","Type":"ContainerDied","Data":"5b4837478032d6f228213492fc29c716efb6e8cb27c990cd1e4941a4c5cb822e"} Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.202033 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.202067 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.208225 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.226084 4801 scope.go:117] "RemoveContainer" containerID="ea708b073a20b942c85ffca1ca28cf29c1b4e27f384fa4a16e0ff87dbbcdb399" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.264027 4801 scope.go:117] "RemoveContainer" containerID="5f2c158212515aff75d727b2848dede2a728d0631c22dd2169b35ee9f21a6e58" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.279938 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.352589 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.372939 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.402505 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.418856 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: E1206 03:31:08.419311 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-metadata" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419332 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-metadata" Dec 06 03:31:08 crc kubenswrapper[4801]: E1206 03:31:08.419351 4801 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f001cd4d-b7de-410b-af82-1e38fe590a21" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419357 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f001cd4d-b7de-410b-af82-1e38fe590a21" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 03:31:08 crc kubenswrapper[4801]: E1206 03:31:08.419369 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-log" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419410 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-log" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419584 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-metadata" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419601 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f001cd4d-b7de-410b-af82-1e38fe590a21" containerName="nova-cell1-novncproxy-novncproxy" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.419612 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" containerName="nova-metadata-log" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.420651 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.425518 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.431459 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.431682 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.432846 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.436942 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.437236 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.437442 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.442160 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.452520 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.480384 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.482550 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.512980 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.513322 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.513480 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.513734 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rf9r\" (UniqueName: \"kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.513945 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 
03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.517311 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.616954 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617020 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869ms\" (UniqueName: \"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617048 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6xrz\" (UniqueName: \"kubernetes.io/projected/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-kube-api-access-x6xrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617079 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617113 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rf9r\" (UniqueName: 
\"kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617150 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617217 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617248 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617285 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617322 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-config-data\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617399 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617447 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617478 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.617504 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: 
\"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.629231 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.639408 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.640161 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.655430 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rf9r\" (UniqueName: \"kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.656501 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.719373 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.719735 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6xrz\" (UniqueName: \"kubernetes.io/projected/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-kube-api-access-x6xrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.720167 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869ms\" (UniqueName: \"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.720439 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.720474 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.720678 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.720824 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.721262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.721379 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.721524 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.721642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc\") pod 
\"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.721730 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.722409 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.724123 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.724641 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.728591 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.729032 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.729381 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.738126 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869ms\" (UniqueName: \"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms\") pod \"dnsmasq-dns-5b856c5697-57pwn\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.738720 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6xrz\" (UniqueName: \"kubernetes.io/projected/8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25-kube-api-access-x6xrz\") pod \"nova-cell1-novncproxy-0\" (UID: \"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25\") " pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.752314 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.758672 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:08 crc kubenswrapper[4801]: I1206 03:31:08.809173 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:09 crc kubenswrapper[4801]: I1206 03:31:09.223247 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a8b52cf-722d-47ee-9942-f28c95eb337d" path="/var/lib/kubelet/pods/0a8b52cf-722d-47ee-9942-f28c95eb337d/volumes" Dec 06 03:31:09 crc kubenswrapper[4801]: I1206 03:31:09.224259 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f001cd4d-b7de-410b-af82-1e38fe590a21" path="/var/lib/kubelet/pods/f001cd4d-b7de-410b-af82-1e38fe590a21/volumes" Dec 06 03:31:09 crc kubenswrapper[4801]: I1206 03:31:09.253691 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 06 03:31:09 crc kubenswrapper[4801]: W1206 03:31:09.263816 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bdab0ac_ed8c_443a_aec4_31e4fcc2fc25.slice/crio-03f7fdf5d54d1601da2513e16f1af2af4d9fa719c7895bee78fae2cbe3f01cd8 WatchSource:0}: Error finding container 03f7fdf5d54d1601da2513e16f1af2af4d9fa719c7895bee78fae2cbe3f01cd8: Status 404 returned error can't find the container with id 03f7fdf5d54d1601da2513e16f1af2af4d9fa719c7895bee78fae2cbe3f01cd8 Dec 06 03:31:09 crc kubenswrapper[4801]: I1206 03:31:09.315601 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:09 crc kubenswrapper[4801]: W1206 03:31:09.321573 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ef16fc6_473c_4c44_83b5_21a6fdd0a93c.slice/crio-56e98d974adffaa211ef3a90ce5081a2c6f60f2dc801286c8cd3ca428717a73b WatchSource:0}: Error finding container 
56e98d974adffaa211ef3a90ce5081a2c6f60f2dc801286c8cd3ca428717a73b: Status 404 returned error can't find the container with id 56e98d974adffaa211ef3a90ce5081a2c6f60f2dc801286c8cd3ca428717a73b Dec 06 03:31:09 crc kubenswrapper[4801]: I1206 03:31:09.426905 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:31:09 crc kubenswrapper[4801]: W1206 03:31:09.446193 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ba81752_9263_4679_9908_c8f6eecd163d.slice/crio-aef700c21f2e56db8f48c5ce0f572494fe92b9b5c0f40f585cab1a5d671c6ac2 WatchSource:0}: Error finding container aef700c21f2e56db8f48c5ce0f572494fe92b9b5c0f40f585cab1a5d671c6ac2: Status 404 returned error can't find the container with id aef700c21f2e56db8f48c5ce0f572494fe92b9b5c0f40f585cab1a5d671c6ac2 Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.219614 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25","Type":"ContainerStarted","Data":"a3ff874627e88f07691233c6e5024e04742f84a598f312ac61c8179f01ed5dbc"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.219929 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25","Type":"ContainerStarted","Data":"03f7fdf5d54d1601da2513e16f1af2af4d9fa719c7895bee78fae2cbe3f01cd8"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.221726 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerStarted","Data":"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.221767 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerStarted","Data":"56e98d974adffaa211ef3a90ce5081a2c6f60f2dc801286c8cd3ca428717a73b"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.223273 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerStarted","Data":"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.223322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerStarted","Data":"aef700c21f2e56db8f48c5ce0f572494fe92b9b5c0f40f585cab1a5d671c6ac2"} Dec 06 03:31:10 crc kubenswrapper[4801]: I1206 03:31:10.242856 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.242839179 podStartE2EDuration="2.242839179s" podCreationTimestamp="2025-12-06 03:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:10.239936259 +0000 UTC m=+1523.362543831" watchObservedRunningTime="2025-12-06 03:31:10.242839179 +0000 UTC m=+1523.365446751" Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.027105 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.254059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerStarted","Data":"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75"} Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.259626 4801 generic.go:334] "Generic (PLEG): container finished" podID="5ba81752-9263-4679-9908-c8f6eecd163d" 
containerID="171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db" exitCode=0 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.259707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerDied","Data":"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db"} Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.260733 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-api" containerID="cri-o://92135db5e8647758050ab1e1eccaaefae7a53b6eefef4a18b49c06928433d1ab" gracePeriod=30 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.260875 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-log" containerID="cri-o://dc021707c138c1f320430680148bf61e8cf56acd639ed8aa140ddcd897afc30e" gracePeriod=30 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.299100 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.299081593 podStartE2EDuration="3.299081593s" podCreationTimestamp="2025-12-06 03:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:11.295914576 +0000 UTC m=+1524.418522148" watchObservedRunningTime="2025-12-06 03:31:11.299081593 +0000 UTC m=+1524.421689165" Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.674951 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.675517 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-central-agent" containerID="cri-o://e01fbe63de003ac5929a870ea1272614f2df929df74a1d10851c3463e550f579" gracePeriod=30 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.675576 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="proxy-httpd" containerID="cri-o://3243113ab86709f8248c7efa7fce2630a3305dd299d3c3212df806c8e6aa3d76" gracePeriod=30 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.675630 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="sg-core" containerID="cri-o://8f1835da72045a39f1a0d0b8b3239379b36aa3b46233bdaea047ae9baf3d9ace" gracePeriod=30 Dec 06 03:31:11 crc kubenswrapper[4801]: I1206 03:31:11.675668 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-notification-agent" containerID="cri-o://2ff70614331fa589ba3a7ca547f71af86ca8d306220838332527024261bef8cd" gracePeriod=30 Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.270619 4801 generic.go:334] "Generic (PLEG): container finished" podID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerID="dc021707c138c1f320430680148bf61e8cf56acd639ed8aa140ddcd897afc30e" exitCode=143 Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.270710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerDied","Data":"dc021707c138c1f320430680148bf61e8cf56acd639ed8aa140ddcd897afc30e"} Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.273034 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" 
event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerStarted","Data":"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e"} Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.274102 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.276148 4801 generic.go:334] "Generic (PLEG): container finished" podID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerID="3243113ab86709f8248c7efa7fce2630a3305dd299d3c3212df806c8e6aa3d76" exitCode=0 Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.276171 4801 generic.go:334] "Generic (PLEG): container finished" podID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerID="8f1835da72045a39f1a0d0b8b3239379b36aa3b46233bdaea047ae9baf3d9ace" exitCode=2 Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.276696 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerDied","Data":"3243113ab86709f8248c7efa7fce2630a3305dd299d3c3212df806c8e6aa3d76"} Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.276722 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerDied","Data":"8f1835da72045a39f1a0d0b8b3239379b36aa3b46233bdaea047ae9baf3d9ace"} Dec 06 03:31:12 crc kubenswrapper[4801]: I1206 03:31:12.301842 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" podStartSLOduration=4.301827222 podStartE2EDuration="4.301827222s" podCreationTimestamp="2025-12-06 03:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:12.298246694 +0000 UTC m=+1525.420854256" watchObservedRunningTime="2025-12-06 03:31:12.301827222 +0000 UTC 
m=+1525.424434794" Dec 06 03:31:13 crc kubenswrapper[4801]: I1206 03:31:13.286690 4801 generic.go:334] "Generic (PLEG): container finished" podID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerID="e01fbe63de003ac5929a870ea1272614f2df929df74a1d10851c3463e550f579" exitCode=0 Dec 06 03:31:13 crc kubenswrapper[4801]: I1206 03:31:13.286819 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerDied","Data":"e01fbe63de003ac5929a870ea1272614f2df929df74a1d10851c3463e550f579"} Dec 06 03:31:13 crc kubenswrapper[4801]: I1206 03:31:13.752856 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:31:13 crc kubenswrapper[4801]: I1206 03:31:13.752916 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:31:13 crc kubenswrapper[4801]: I1206 03:31:13.759107 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:15 crc kubenswrapper[4801]: I1206 03:31:15.310326 4801 generic.go:334] "Generic (PLEG): container finished" podID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerID="92135db5e8647758050ab1e1eccaaefae7a53b6eefef4a18b49c06928433d1ab" exitCode=0 Dec 06 03:31:15 crc kubenswrapper[4801]: I1206 03:31:15.310398 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerDied","Data":"92135db5e8647758050ab1e1eccaaefae7a53b6eefef4a18b49c06928433d1ab"} Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.222408 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.310123 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs\") pod \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.310293 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtk6\" (UniqueName: \"kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6\") pod \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.310315 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle\") pod \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.310361 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data\") pod \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\" (UID: \"61a679a7-1f1d-450c-9e3c-4950851a3bf3\") " Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.311962 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs" (OuterVolumeSpecName: "logs") pod "61a679a7-1f1d-450c-9e3c-4950851a3bf3" (UID: "61a679a7-1f1d-450c-9e3c-4950851a3bf3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.334101 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6" (OuterVolumeSpecName: "kube-api-access-smtk6") pod "61a679a7-1f1d-450c-9e3c-4950851a3bf3" (UID: "61a679a7-1f1d-450c-9e3c-4950851a3bf3"). InnerVolumeSpecName "kube-api-access-smtk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.366697 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"61a679a7-1f1d-450c-9e3c-4950851a3bf3","Type":"ContainerDied","Data":"bb890ef25420824dbf349a5b63da4d1e48ab390f25248462031f084dbdd3e4dd"} Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.366777 4801 scope.go:117] "RemoveContainer" containerID="92135db5e8647758050ab1e1eccaaefae7a53b6eefef4a18b49c06928433d1ab" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.366974 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.418900 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61a679a7-1f1d-450c-9e3c-4950851a3bf3-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.419181 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtk6\" (UniqueName: \"kubernetes.io/projected/61a679a7-1f1d-450c-9e3c-4950851a3bf3-kube-api-access-smtk6\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.436144 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data" (OuterVolumeSpecName: "config-data") pod "61a679a7-1f1d-450c-9e3c-4950851a3bf3" (UID: "61a679a7-1f1d-450c-9e3c-4950851a3bf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.459737 4801 scope.go:117] "RemoveContainer" containerID="dc021707c138c1f320430680148bf61e8cf56acd639ed8aa140ddcd897afc30e" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.483515 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61a679a7-1f1d-450c-9e3c-4950851a3bf3" (UID: "61a679a7-1f1d-450c-9e3c-4950851a3bf3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.520578 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.520610 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61a679a7-1f1d-450c-9e3c-4950851a3bf3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.707687 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.726943 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.734114 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:16 crc kubenswrapper[4801]: E1206 03:31:16.734569 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-api" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.734614 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-api" Dec 06 03:31:16 crc kubenswrapper[4801]: E1206 03:31:16.734636 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-log" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.734642 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-log" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.734864 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" 
containerName="nova-api-api" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.734885 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" containerName="nova-api-log" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.735840 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.738023 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.738324 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.738538 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.744311 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829423 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829647 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmfb\" (UniqueName: \"kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829727 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829817 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829846 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.829882 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.931663 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.931989 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rmfb\" (UniqueName: \"kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 
03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.932169 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.932455 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.932587 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.932732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.933142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.937691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data\") pod \"nova-api-0\" (UID: 
\"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.938138 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.940653 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.941976 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:16 crc kubenswrapper[4801]: I1206 03:31:16.951881 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rmfb\" (UniqueName: \"kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb\") pod \"nova-api-0\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " pod="openstack/nova-api-0" Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.050407 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.248033 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a679a7-1f1d-450c-9e3c-4950851a3bf3" path="/var/lib/kubelet/pods/61a679a7-1f1d-450c-9e3c-4950851a3bf3/volumes" Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.397608 4801 generic.go:334] "Generic (PLEG): container finished" podID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerID="2ff70614331fa589ba3a7ca547f71af86ca8d306220838332527024261bef8cd" exitCode=0 Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.397663 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerDied","Data":"2ff70614331fa589ba3a7ca547f71af86ca8d306220838332527024261bef8cd"} Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.537484 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:17 crc kubenswrapper[4801]: I1206 03:31:17.919999 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.061954 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062085 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062172 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xt8zg\" (UniqueName: \"kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062240 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062263 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062287 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062348 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd\") pod \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\" (UID: \"f97bc710-46ba-46a0-bdc8-038e22e68a8f\") " Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.062502 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.063495 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.064097 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.069609 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts" (OuterVolumeSpecName: "scripts") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.070193 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg" (OuterVolumeSpecName: "kube-api-access-xt8zg") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "kube-api-access-xt8zg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.090298 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.143302 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.144805 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165556 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165599 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f97bc710-46ba-46a0-bdc8-038e22e68a8f-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165611 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165626 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xt8zg\" (UniqueName: \"kubernetes.io/projected/f97bc710-46ba-46a0-bdc8-038e22e68a8f-kube-api-access-xt8zg\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165669 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.165680 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.177527 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data" (OuterVolumeSpecName: "config-data") pod "f97bc710-46ba-46a0-bdc8-038e22e68a8f" (UID: "f97bc710-46ba-46a0-bdc8-038e22e68a8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.272250 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f97bc710-46ba-46a0-bdc8-038e22e68a8f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.409441 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerStarted","Data":"fd09b56b109338be3da989d62dbc3025f1371ec7e61a55e59ae350e8ebbee741"} Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.411495 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f97bc710-46ba-46a0-bdc8-038e22e68a8f","Type":"ContainerDied","Data":"6ab8a71d24c52a8438743acd21e0d528a6bc10cc974e94959b943ffff447fbd6"} Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.411546 4801 scope.go:117] "RemoveContainer" containerID="3243113ab86709f8248c7efa7fce2630a3305dd299d3c3212df806c8e6aa3d76" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.411548 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.433287 4801 scope.go:117] "RemoveContainer" containerID="8f1835da72045a39f1a0d0b8b3239379b36aa3b46233bdaea047ae9baf3d9ace" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.454944 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.458811 4801 scope.go:117] "RemoveContainer" containerID="2ff70614331fa589ba3a7ca547f71af86ca8d306220838332527024261bef8cd" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.463416 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.481735 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:18 crc kubenswrapper[4801]: E1206 03:31:18.482145 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="proxy-httpd" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482162 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="proxy-httpd" Dec 06 03:31:18 crc kubenswrapper[4801]: E1206 03:31:18.482220 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-notification-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482228 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-notification-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: E1206 03:31:18.482266 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="sg-core" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482273 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="sg-core" Dec 06 03:31:18 crc kubenswrapper[4801]: E1206 03:31:18.482284 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-central-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482291 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-central-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482468 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-central-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482481 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="ceilometer-notification-agent" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482493 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="proxy-httpd" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.482505 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" containerName="sg-core" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.483943 4801 scope.go:117] "RemoveContainer" containerID="e01fbe63de003ac5929a870ea1272614f2df929df74a1d10851c3463e550f579" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.485649 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.487800 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.488668 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.488698 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.495192 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577546 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577598 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577629 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577702 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577809 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6qq\" (UniqueName: \"kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577846 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.577873 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679298 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679386 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6qq\" (UniqueName: \"kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679418 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679440 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679465 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679486 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679517 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.679552 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.681263 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.682453 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.686999 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.693837 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 
03:31:18.697357 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6qq\" (UniqueName: \"kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.698021 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.702314 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.706835 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.753195 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.753242 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.759229 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.778167 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.811567 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.816137 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.876733 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"] Dec 06 03:31:18 crc kubenswrapper[4801]: I1206 03:31:18.877003 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="dnsmasq-dns" containerID="cri-o://c684c0bb6dc318947d17ce22631ebd964e31c23c7b053bc867e95f00d742445c" gracePeriod=10 Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.224025 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f97bc710-46ba-46a0-bdc8-038e22e68a8f" path="/var/lib/kubelet/pods/f97bc710-46ba-46a0-bdc8-038e22e68a8f/volumes" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.354116 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.424242 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerStarted","Data":"f26bda35dd6da283bb12ccf214c165e4eade1447e24fe91889bd341eac314ae0"} Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.426032 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerStarted","Data":"67e855fe381a09c37e4d851c35ae5e4755773173b027bb942244018b0e6eee75"} Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 
03:31:19.427776 4801 generic.go:334] "Generic (PLEG): container finished" podID="f563028e-6f64-4540-9043-f9961c26e81c" containerID="c684c0bb6dc318947d17ce22631ebd964e31c23c7b053bc867e95f00d742445c" exitCode=0 Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.427881 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" event={"ID":"f563028e-6f64-4540-9043-f9961c26e81c","Type":"ContainerDied","Data":"c684c0bb6dc318947d17ce22631ebd964e31c23c7b053bc867e95f00d742445c"} Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.454949 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.622810 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-x48bz"] Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.624288 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.626672 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.627117 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.640902 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x48bz"] Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.771608 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.771597 4801 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.802888 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.802953 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.803026 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffl9\" (UniqueName: \"kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.803101 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 
03:31:19.905036 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffl9\" (UniqueName: \"kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.905124 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.905187 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.905214 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.911131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.911599 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.912890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.933240 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffl9\" (UniqueName: \"kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9\") pod \"nova-cell1-cell-mapping-x48bz\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.948833 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:19 crc kubenswrapper[4801]: I1206 03:31:19.972349 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.108777 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config\") pod \"f563028e-6f64-4540-9043-f9961c26e81c\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.109183 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc\") pod \"f563028e-6f64-4540-9043-f9961c26e81c\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.109269 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s7r7\" (UniqueName: \"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7\") pod \"f563028e-6f64-4540-9043-f9961c26e81c\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.109292 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb\") pod \"f563028e-6f64-4540-9043-f9961c26e81c\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.109316 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb\") pod \"f563028e-6f64-4540-9043-f9961c26e81c\" (UID: \"f563028e-6f64-4540-9043-f9961c26e81c\") " Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.129299 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7" (OuterVolumeSpecName: "kube-api-access-4s7r7") pod "f563028e-6f64-4540-9043-f9961c26e81c" (UID: "f563028e-6f64-4540-9043-f9961c26e81c"). InnerVolumeSpecName "kube-api-access-4s7r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.184061 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config" (OuterVolumeSpecName: "config") pod "f563028e-6f64-4540-9043-f9961c26e81c" (UID: "f563028e-6f64-4540-9043-f9961c26e81c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.189902 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f563028e-6f64-4540-9043-f9961c26e81c" (UID: "f563028e-6f64-4540-9043-f9961c26e81c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.214419 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s7r7\" (UniqueName: \"kubernetes.io/projected/f563028e-6f64-4540-9043-f9961c26e81c-kube-api-access-4s7r7\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.214695 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.214826 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.240220 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f563028e-6f64-4540-9043-f9961c26e81c" (UID: "f563028e-6f64-4540-9043-f9961c26e81c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.243784 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f563028e-6f64-4540-9043-f9961c26e81c" (UID: "f563028e-6f64-4540-9043-f9961c26e81c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.316916 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.317228 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f563028e-6f64-4540-9043-f9961c26e81c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.443687 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" event={"ID":"f563028e-6f64-4540-9043-f9961c26e81c","Type":"ContainerDied","Data":"650b8aaa28dba205e14617f0455e8eb3e9676fca4c3838436d616073395d294f"} Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.443786 4801 scope.go:117] "RemoveContainer" containerID="c684c0bb6dc318947d17ce22631ebd964e31c23c7b053bc867e95f00d742445c" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.443977 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-k2lqg" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.455719 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerStarted","Data":"81f46def439e8b0a05e59d8a09646e6e7ce6de397e3855560af2708277d8db91"} Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.476917 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x48bz"] Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.495978 4801 scope.go:117] "RemoveContainer" containerID="5ed7d2ad75d030f874863a39b9d8082d6c286956c6fc5cc03412b370ea1df777" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.498688 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.498670343 podStartE2EDuration="4.498670343s" podCreationTimestamp="2025-12-06 03:31:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:20.488122634 +0000 UTC m=+1533.610730206" watchObservedRunningTime="2025-12-06 03:31:20.498670343 +0000 UTC m=+1533.621277905" Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.534466 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"] Dec 06 03:31:20 crc kubenswrapper[4801]: I1206 03:31:20.544384 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-k2lqg"] Dec 06 03:31:21 crc kubenswrapper[4801]: I1206 03:31:21.229140 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f563028e-6f64-4540-9043-f9961c26e81c" path="/var/lib/kubelet/pods/f563028e-6f64-4540-9043-f9961c26e81c/volumes" Dec 06 03:31:21 crc kubenswrapper[4801]: I1206 03:31:21.471673 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-cell-mapping-x48bz" event={"ID":"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0","Type":"ContainerStarted","Data":"a28870c1588a4be1495a4c58e352390ee00ab8f982551889d6715a8d6ddabf25"} Dec 06 03:31:21 crc kubenswrapper[4801]: I1206 03:31:21.471892 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x48bz" event={"ID":"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0","Type":"ContainerStarted","Data":"c005009b7c014ec4274327b05423f3a1720f81ee5d8433b9dda29e6e9f258d89"} Dec 06 03:31:21 crc kubenswrapper[4801]: I1206 03:31:21.477994 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerStarted","Data":"688910860aa4bcbf08d703f730a7b469a5edcaaa96b410af76a33a7723132eeb"} Dec 06 03:31:21 crc kubenswrapper[4801]: I1206 03:31:21.490172 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-x48bz" podStartSLOduration=2.490155413 podStartE2EDuration="2.490155413s" podCreationTimestamp="2025-12-06 03:31:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:21.487271854 +0000 UTC m=+1534.609879416" watchObservedRunningTime="2025-12-06 03:31:21.490155413 +0000 UTC m=+1534.612762985" Dec 06 03:31:22 crc kubenswrapper[4801]: I1206 03:31:22.489961 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerStarted","Data":"968ec772f9477e403e8669f666fce906226c348e9801a39f3606f9f85b917192"} Dec 06 03:31:25 crc kubenswrapper[4801]: I1206 03:31:25.540019 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerStarted","Data":"68f83de2b7d363c229a1d3c8b75453b858797dd1200cb4c85726cb27bdf9ee2c"} Dec 06 03:31:26 
crc kubenswrapper[4801]: I1206 03:31:26.548981 4801 generic.go:334] "Generic (PLEG): container finished" podID="09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" containerID="a28870c1588a4be1495a4c58e352390ee00ab8f982551889d6715a8d6ddabf25" exitCode=0 Dec 06 03:31:26 crc kubenswrapper[4801]: I1206 03:31:26.549066 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x48bz" event={"ID":"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0","Type":"ContainerDied","Data":"a28870c1588a4be1495a4c58e352390ee00ab8f982551889d6715a8d6ddabf25"} Dec 06 03:31:27 crc kubenswrapper[4801]: I1206 03:31:27.051498 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:31:27 crc kubenswrapper[4801]: I1206 03:31:27.052910 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:31:27 crc kubenswrapper[4801]: I1206 03:31:27.561359 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerStarted","Data":"446fdbf6ad8292e235c8dd770f603bcdfa738ec5e930ae0612f87f657082690e"} Dec 06 03:31:27 crc kubenswrapper[4801]: I1206 03:31:27.970215 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.044744 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle\") pod \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.044882 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data\") pod \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.044928 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts\") pod \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.045027 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffl9\" (UniqueName: \"kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9\") pod \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\" (UID: \"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0\") " Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.051248 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9" (OuterVolumeSpecName: "kube-api-access-wffl9") pod "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" (UID: "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0"). InnerVolumeSpecName "kube-api-access-wffl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.060960 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.182:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.061283 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.182:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.061943 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts" (OuterVolumeSpecName: "scripts") pod "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" (UID: "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.074558 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" (UID: "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.077876 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data" (OuterVolumeSpecName: "config-data") pod "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" (UID: "09a2d0dd-f819-4e34-90f5-04c2d7ac63d0"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.147423 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.147458 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.147467 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.147475 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wffl9\" (UniqueName: \"kubernetes.io/projected/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0-kube-api-access-wffl9\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.576883 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x48bz" event={"ID":"09a2d0dd-f819-4e34-90f5-04c2d7ac63d0","Type":"ContainerDied","Data":"c005009b7c014ec4274327b05423f3a1720f81ee5d8433b9dda29e6e9f258d89"} Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.577167 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c005009b7c014ec4274327b05423f3a1720f81ee5d8433b9dda29e6e9f258d89" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.576933 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x48bz" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.577343 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.629913 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.309609129 podStartE2EDuration="10.629886566s" podCreationTimestamp="2025-12-06 03:31:18 +0000 UTC" firstStartedPulling="2025-12-06 03:31:19.360623938 +0000 UTC m=+1532.483231510" lastFinishedPulling="2025-12-06 03:31:26.680901375 +0000 UTC m=+1539.803508947" observedRunningTime="2025-12-06 03:31:28.620834938 +0000 UTC m=+1541.743442560" watchObservedRunningTime="2025-12-06 03:31:28.629886566 +0000 UTC m=+1541.752494158" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.764003 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.803688 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.804273 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.804454 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-log" containerID="cri-o://67e855fe381a09c37e4d851c35ae5e4755773173b027bb942244018b0e6eee75" gracePeriod=30 Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.804553 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-api" containerID="cri-o://81f46def439e8b0a05e59d8a09646e6e7ce6de397e3855560af2708277d8db91" 
gracePeriod=30 Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.819081 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.825707 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.826113 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" containerName="nova-scheduler-scheduler" containerID="cri-o://f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" gracePeriod=30 Dec 06 03:31:28 crc kubenswrapper[4801]: I1206 03:31:28.879492 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:29 crc kubenswrapper[4801]: I1206 03:31:29.599492 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 03:31:30 crc kubenswrapper[4801]: E1206 03:31:30.436060 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:31:30 crc kubenswrapper[4801]: E1206 03:31:30.437799 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:31:30 crc kubenswrapper[4801]: E1206 03:31:30.439428 4801 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 03:31:30 crc kubenswrapper[4801]: E1206 03:31:30.439477 4801 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" containerName="nova-scheduler-scheduler" Dec 06 03:31:30 crc kubenswrapper[4801]: I1206 03:31:30.595820 4801 generic.go:334] "Generic (PLEG): container finished" podID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerID="67e855fe381a09c37e4d851c35ae5e4755773173b027bb942244018b0e6eee75" exitCode=143 Dec 06 03:31:30 crc kubenswrapper[4801]: I1206 03:31:30.596002 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" containerID="cri-o://ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc" gracePeriod=30 Dec 06 03:31:30 crc kubenswrapper[4801]: I1206 03:31:30.596069 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerDied","Data":"67e855fe381a09c37e4d851c35ae5e4755773173b027bb942244018b0e6eee75"} Dec 06 03:31:30 crc kubenswrapper[4801]: I1206 03:31:30.596271 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" containerID="cri-o://6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75" gracePeriod=30 Dec 06 03:31:31 crc kubenswrapper[4801]: I1206 03:31:31.603943 4801 generic.go:334] "Generic (PLEG): container finished" podID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" 
containerID="f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" exitCode=0 Dec 06 03:31:31 crc kubenswrapper[4801]: I1206 03:31:31.604018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ddc46c-cdb8-400e-8308-dfd2ce38dee4","Type":"ContainerDied","Data":"f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98"} Dec 06 03:31:31 crc kubenswrapper[4801]: I1206 03:31:31.606480 4801 generic.go:334] "Generic (PLEG): container finished" podID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerID="ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc" exitCode=143 Dec 06 03:31:31 crc kubenswrapper[4801]: I1206 03:31:31.606535 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerDied","Data":"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc"} Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.148135 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.222537 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle\") pod \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.222858 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data\") pod \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.223137 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrms\" (UniqueName: \"kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms\") pod \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\" (UID: \"40ddc46c-cdb8-400e-8308-dfd2ce38dee4\") " Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.232624 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms" (OuterVolumeSpecName: "kube-api-access-6vrms") pod "40ddc46c-cdb8-400e-8308-dfd2ce38dee4" (UID: "40ddc46c-cdb8-400e-8308-dfd2ce38dee4"). InnerVolumeSpecName "kube-api-access-6vrms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.254504 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40ddc46c-cdb8-400e-8308-dfd2ce38dee4" (UID: "40ddc46c-cdb8-400e-8308-dfd2ce38dee4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.263488 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data" (OuterVolumeSpecName: "config-data") pod "40ddc46c-cdb8-400e-8308-dfd2ce38dee4" (UID: "40ddc46c-cdb8-400e-8308-dfd2ce38dee4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.325832 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.325870 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.325881 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrms\" (UniqueName: \"kubernetes.io/projected/40ddc46c-cdb8-400e-8308-dfd2ce38dee4-kube-api-access-6vrms\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.617322 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ddc46c-cdb8-400e-8308-dfd2ce38dee4","Type":"ContainerDied","Data":"7dcd8a73e20c4731cf936a7b77fb23b51fcc32e3dc8289ea6e70684132cb5260"} Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.617384 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.617391 4801 scope.go:117] "RemoveContainer" containerID="f9a0fc7f4a8bb8fee1d953348fd66f8e4bc538abefad78e00abc4969118eea98" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.649551 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.664932 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.673655 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:32 crc kubenswrapper[4801]: E1206 03:31:32.674128 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" containerName="nova-scheduler-scheduler" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674147 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" containerName="nova-scheduler-scheduler" Dec 06 03:31:32 crc kubenswrapper[4801]: E1206 03:31:32.674161 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="init" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674168 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="init" Dec 06 03:31:32 crc kubenswrapper[4801]: E1206 03:31:32.674183 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" containerName="nova-manage" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674189 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" containerName="nova-manage" Dec 06 03:31:32 crc kubenswrapper[4801]: E1206 03:31:32.674203 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="dnsmasq-dns" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674209 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="dnsmasq-dns" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674359 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f563028e-6f64-4540-9043-f9961c26e81c" containerName="dnsmasq-dns" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674378 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" containerName="nova-manage" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.674393 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" containerName="nova-scheduler-scheduler" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.675043 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.679384 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.683029 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.732218 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.732285 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpjhz\" (UniqueName: 
\"kubernetes.io/projected/a9b09a1c-654f-42ab-9f77-012033ce6f13-kube-api-access-mpjhz\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.732515 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.834737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpjhz\" (UniqueName: \"kubernetes.io/projected/a9b09a1c-654f-42ab-9f77-012033ce6f13-kube-api-access-mpjhz\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.834839 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.834936 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.840771 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.841111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b09a1c-654f-42ab-9f77-012033ce6f13-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.851146 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpjhz\" (UniqueName: \"kubernetes.io/projected/a9b09a1c-654f-42ab-9f77-012033ce6f13-kube-api-access-mpjhz\") pod \"nova-scheduler-0\" (UID: \"a9b09a1c-654f-42ab-9f77-012033ce6f13\") " pod="openstack/nova-scheduler-0" Dec 06 03:31:32 crc kubenswrapper[4801]: I1206 03:31:32.992477 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.236443 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ddc46c-cdb8-400e-8308-dfd2ce38dee4" path="/var/lib/kubelet/pods/40ddc46c-cdb8-400e-8308-dfd2ce38dee4/volumes" Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.422919 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.626671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b09a1c-654f-42ab-9f77-012033ce6f13","Type":"ContainerStarted","Data":"2188c6a8491112b5b3a1223c2feb4c6842000d0cb4c8d9f892455a6f9597cd28"} Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.628926 4801 generic.go:334] "Generic (PLEG): container finished" podID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerID="81f46def439e8b0a05e59d8a09646e6e7ce6de397e3855560af2708277d8db91" exitCode=0 Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.628971 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerDied","Data":"81f46def439e8b0a05e59d8a09646e6e7ce6de397e3855560af2708277d8db91"} Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.753535 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Dec 06 03:31:33 crc kubenswrapper[4801]: I1206 03:31:33.754291 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": dial tcp 10.217.0.179:8775: connect: connection refused" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.128558 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260299 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260360 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260490 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260566 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rmfb\" (UniqueName: \"kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.260652 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data\") pod \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\" (UID: \"4209f8db-6ed7-461f-9b54-f109fc4e7ce5\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.261346 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs" (OuterVolumeSpecName: "logs") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.265557 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb" (OuterVolumeSpecName: "kube-api-access-9rmfb") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "kube-api-access-9rmfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.287776 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.305995 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data" (OuterVolumeSpecName: "config-data") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.310579 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.312667 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4209f8db-6ed7-461f-9b54-f109fc4e7ce5" (UID: "4209f8db-6ed7-461f-9b54-f109fc4e7ce5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.362945 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.362992 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rmfb\" (UniqueName: \"kubernetes.io/projected/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-kube-api-access-9rmfb\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.363009 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.363020 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-public-tls-certs\") on 
node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.363033 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-logs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.363043 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4209f8db-6ed7-461f-9b54-f109fc4e7ce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.379644 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.464530 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rf9r\" (UniqueName: \"kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r\") pod \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.464981 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data\") pod \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.465011 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle\") pod \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.465047 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs\") pod \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.465111 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs\") pod \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\" (UID: \"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c\") " Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.465516 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs" (OuterVolumeSpecName: "logs") pod "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" (UID: "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.472970 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r" (OuterVolumeSpecName: "kube-api-access-2rf9r") pod "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" (UID: "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c"). InnerVolumeSpecName "kube-api-access-2rf9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.488543 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" (UID: "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.489724 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data" (OuterVolumeSpecName: "config-data") pod "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" (UID: "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.509222 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" (UID: "0ef16fc6-473c-4c44-83b5-21a6fdd0a93c"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.573014 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.573062 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.573076 4801 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.573090 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-logs\") on node \"crc\" DevicePath 
\"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.573101 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rf9r\" (UniqueName: \"kubernetes.io/projected/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c-kube-api-access-2rf9r\") on node \"crc\" DevicePath \"\"" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.638217 4801 generic.go:334] "Generic (PLEG): container finished" podID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerID="6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75" exitCode=0 Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.638301 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.638560 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerDied","Data":"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75"} Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.638617 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0ef16fc6-473c-4c44-83b5-21a6fdd0a93c","Type":"ContainerDied","Data":"56e98d974adffaa211ef3a90ce5081a2c6f60f2dc801286c8cd3ca428717a73b"} Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.638645 4801 scope.go:117] "RemoveContainer" containerID="6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.640877 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4209f8db-6ed7-461f-9b54-f109fc4e7ce5","Type":"ContainerDied","Data":"fd09b56b109338be3da989d62dbc3025f1371ec7e61a55e59ae350e8ebbee741"} Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.640962 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.644873 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b09a1c-654f-42ab-9f77-012033ce6f13","Type":"ContainerStarted","Data":"ca1ce452797271fa5f5a9e2c6ee88e33360da6bd8a75799daf8c9c4ddd5af7b7"} Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.661769 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.661740951 podStartE2EDuration="2.661740951s" podCreationTimestamp="2025-12-06 03:31:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:34.661068143 +0000 UTC m=+1547.783675735" watchObservedRunningTime="2025-12-06 03:31:34.661740951 +0000 UTC m=+1547.784348523" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.668123 4801 scope.go:117] "RemoveContainer" containerID="ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.704192 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.723936 4801 scope.go:117] "RemoveContainer" containerID="6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.724074 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 03:31:34.733882 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75\": container with ID starting with 6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75 not found: ID does not exist" 
containerID="6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.733933 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75"} err="failed to get container status \"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75\": rpc error: code = NotFound desc = could not find container \"6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75\": container with ID starting with 6edf63fc8d4f1ba2a64553a8c4f63e82ffa2eec3ebb78d6376a5963d5ed96f75 not found: ID does not exist" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.733957 4801 scope.go:117] "RemoveContainer" containerID="ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.738922 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 03:31:34.739956 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc\": container with ID starting with ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc not found: ID does not exist" containerID="ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.739999 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc"} err="failed to get container status \"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc\": rpc error: code = NotFound desc = could not find container \"ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc\": container with ID starting with 
ff48f99f9d87ccae760baa5195bc80d3a8e7367f226cfe5e7186c25fac3325bc not found: ID does not exist" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.740029 4801 scope.go:117] "RemoveContainer" containerID="81f46def439e8b0a05e59d8a09646e6e7ce6de397e3855560af2708277d8db91" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.754071 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.774039 4801 scope.go:117] "RemoveContainer" containerID="67e855fe381a09c37e4d851c35ae5e4755773173b027bb942244018b0e6eee75" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.777615 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 03:31:34.777972 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-api" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.777990 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-api" Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 03:31:34.778002 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778008 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 03:31:34.778022 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778028 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" Dec 06 03:31:34 crc kubenswrapper[4801]: E1206 
03:31:34.778045 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-log" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778050 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-log" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778232 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-api" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778257 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-metadata" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778264 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" containerName="nova-api-log" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.778274 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" containerName="nova-metadata-log" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.779309 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.783483 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.788667 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.798916 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.807825 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.809331 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.813247 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.813345 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.813582 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.814338 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-config-data\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877336 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cde994a7-1f23-4b09-8f61-a7f5f3393960-logs\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877363 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877493 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877730 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-config-data\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877816 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877835 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-public-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f189c4d3-554b-479b-ba60-7abc6dc13161-logs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.877947 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grfmf\" (UniqueName: \"kubernetes.io/projected/cde994a7-1f23-4b09-8f61-a7f5f3393960-kube-api-access-grfmf\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.878118 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jnt\" (UniqueName: \"kubernetes.io/projected/f189c4d3-554b-479b-ba60-7abc6dc13161-kube-api-access-x2jnt\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.979848 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cde994a7-1f23-4b09-8f61-a7f5f3393960-logs\") pod \"nova-metadata-0\" (UID: 
\"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.979899 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.979932 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.979966 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.979993 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-config-data\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980031 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-public-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980055 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f189c4d3-554b-479b-ba60-7abc6dc13161-logs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980098 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grfmf\" (UniqueName: \"kubernetes.io/projected/cde994a7-1f23-4b09-8f61-a7f5f3393960-kube-api-access-grfmf\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980179 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jnt\" (UniqueName: \"kubernetes.io/projected/f189c4d3-554b-479b-ba60-7abc6dc13161-kube-api-access-x2jnt\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.980221 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-config-data\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.982563 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f189c4d3-554b-479b-ba60-7abc6dc13161-logs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " 
pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.982890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cde994a7-1f23-4b09-8f61-a7f5f3393960-logs\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.984820 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.985899 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.986178 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-config-data\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.986226 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-config-data\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.986276 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.987232 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f189c4d3-554b-479b-ba60-7abc6dc13161-public-tls-certs\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:34 crc kubenswrapper[4801]: I1206 03:31:34.988891 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cde994a7-1f23-4b09-8f61-a7f5f3393960-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.001661 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grfmf\" (UniqueName: \"kubernetes.io/projected/cde994a7-1f23-4b09-8f61-a7f5f3393960-kube-api-access-grfmf\") pod \"nova-metadata-0\" (UID: \"cde994a7-1f23-4b09-8f61-a7f5f3393960\") " pod="openstack/nova-metadata-0" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.002464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jnt\" (UniqueName: \"kubernetes.io/projected/f189c4d3-554b-479b-ba60-7abc6dc13161-kube-api-access-x2jnt\") pod \"nova-api-0\" (UID: \"f189c4d3-554b-479b-ba60-7abc6dc13161\") " pod="openstack/nova-api-0" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.104259 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.130285 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.251670 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef16fc6-473c-4c44-83b5-21a6fdd0a93c" path="/var/lib/kubelet/pods/0ef16fc6-473c-4c44-83b5-21a6fdd0a93c/volumes" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.253211 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4209f8db-6ed7-461f-9b54-f109fc4e7ce5" path="/var/lib/kubelet/pods/4209f8db-6ed7-461f-9b54-f109fc4e7ce5/volumes" Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.601857 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.664005 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cde994a7-1f23-4b09-8f61-a7f5f3393960","Type":"ContainerStarted","Data":"61a150345807bd328f2a5aacbb84e75905638784372f0441ea722ece1b73fcb3"} Dec 06 03:31:35 crc kubenswrapper[4801]: I1206 03:31:35.693602 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 03:31:35 crc kubenswrapper[4801]: W1206 03:31:35.701626 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf189c4d3_554b_479b_ba60_7abc6dc13161.slice/crio-0a9347f1639472d881cdde33df13c1f140d11e3626260b9fedcc2a8a4fec8d5b WatchSource:0}: Error finding container 0a9347f1639472d881cdde33df13c1f140d11e3626260b9fedcc2a8a4fec8d5b: Status 404 returned error can't find the container with id 0a9347f1639472d881cdde33df13c1f140d11e3626260b9fedcc2a8a4fec8d5b Dec 06 03:31:36 crc kubenswrapper[4801]: I1206 03:31:36.691212 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cde994a7-1f23-4b09-8f61-a7f5f3393960","Type":"ContainerStarted","Data":"810ec768742bbb307aeb1caa45687fa31b0f18a28a2c6ef972f4853be73e1aa6"} Dec 
06 03:31:36 crc kubenswrapper[4801]: I1206 03:31:36.694928 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f189c4d3-554b-479b-ba60-7abc6dc13161","Type":"ContainerStarted","Data":"6397ff3e841e01ef819a108cb429896ea147ca1ec214717076b81f0ab12324a1"} Dec 06 03:31:36 crc kubenswrapper[4801]: I1206 03:31:36.694956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f189c4d3-554b-479b-ba60-7abc6dc13161","Type":"ContainerStarted","Data":"0a9347f1639472d881cdde33df13c1f140d11e3626260b9fedcc2a8a4fec8d5b"} Dec 06 03:31:37 crc kubenswrapper[4801]: I1206 03:31:37.704378 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cde994a7-1f23-4b09-8f61-a7f5f3393960","Type":"ContainerStarted","Data":"76177e21a14db0e358e902ef46e24b5ef83fd3be7bac42db01360098884f9784"} Dec 06 03:31:37 crc kubenswrapper[4801]: I1206 03:31:37.707011 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f189c4d3-554b-479b-ba60-7abc6dc13161","Type":"ContainerStarted","Data":"8d156c4eee55e0a21c9add4928fd419aa17d42afa2ad680282640ab7da54321b"} Dec 06 03:31:37 crc kubenswrapper[4801]: I1206 03:31:37.724433 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.724416319 podStartE2EDuration="3.724416319s" podCreationTimestamp="2025-12-06 03:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:37.720660726 +0000 UTC m=+1550.843268298" watchObservedRunningTime="2025-12-06 03:31:37.724416319 +0000 UTC m=+1550.847023891" Dec 06 03:31:37 crc kubenswrapper[4801]: I1206 03:31:37.744936 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.744914311 podStartE2EDuration="3.744914311s" 
podCreationTimestamp="2025-12-06 03:31:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:31:37.740289223 +0000 UTC m=+1550.862896795" watchObservedRunningTime="2025-12-06 03:31:37.744914311 +0000 UTC m=+1550.867521883" Dec 06 03:31:37 crc kubenswrapper[4801]: I1206 03:31:37.992781 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 03:31:40 crc kubenswrapper[4801]: I1206 03:31:40.105306 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:31:40 crc kubenswrapper[4801]: I1206 03:31:40.105663 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 03:31:41 crc kubenswrapper[4801]: I1206 03:31:41.169862 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:31:41 crc kubenswrapper[4801]: I1206 03:31:41.169927 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:31:42 crc kubenswrapper[4801]: I1206 03:31:42.992936 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 03:31:43 crc kubenswrapper[4801]: I1206 03:31:43.018696 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 03:31:43 crc kubenswrapper[4801]: I1206 03:31:43.789011 4801 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 03:31:45 crc kubenswrapper[4801]: I1206 03:31:45.104421 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 03:31:45 crc kubenswrapper[4801]: I1206 03:31:45.104711 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 03:31:45 crc kubenswrapper[4801]: I1206 03:31:45.131116 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:31:45 crc kubenswrapper[4801]: I1206 03:31:45.131221 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 03:31:46 crc kubenswrapper[4801]: I1206 03:31:46.116888 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cde994a7-1f23-4b09-8f61-a7f5f3393960" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:46 crc kubenswrapper[4801]: I1206 03:31:46.116917 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cde994a7-1f23-4b09-8f61-a7f5f3393960" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.186:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:46 crc kubenswrapper[4801]: I1206 03:31:46.142985 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f189c4d3-554b-479b-ba60-7abc6dc13161" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:46 crc kubenswrapper[4801]: I1206 03:31:46.143008 4801 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="f189c4d3-554b-479b-ba60-7abc6dc13161" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 03:31:48 crc kubenswrapper[4801]: I1206 03:31:48.825974 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.110298 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.111079 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.117043 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.118471 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.157923 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.158108 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.158490 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.158540 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.185191 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 03:31:55 crc kubenswrapper[4801]: I1206 03:31:55.195330 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.088318 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.091597 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.100007 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.157257 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmb6b\" (UniqueName: \"kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.157400 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.157466 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.259514 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.259575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.259714 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmb6b\" (UniqueName: \"kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.260092 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.260372 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.281152 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmb6b\" (UniqueName: 
\"kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b\") pod \"redhat-marketplace-4vcf5\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.413565 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.894368 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:00 crc kubenswrapper[4801]: I1206 03:32:00.980433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerStarted","Data":"326d596f0e86d47d2549348c977accb84965761c61d61d94234f1880c69e0c9d"} Dec 06 03:32:02 crc kubenswrapper[4801]: I1206 03:32:02.998523 4801 generic.go:334] "Generic (PLEG): container finished" podID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerID="5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1" exitCode=0 Dec 06 03:32:02 crc kubenswrapper[4801]: I1206 03:32:02.998739 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerDied","Data":"5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1"} Dec 06 03:32:03 crc kubenswrapper[4801]: I1206 03:32:03.370341 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:04 crc kubenswrapper[4801]: I1206 03:32:04.009931 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerStarted","Data":"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38"} Dec 06 03:32:04 crc 
kubenswrapper[4801]: I1206 03:32:04.849589 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 03:32:05 crc kubenswrapper[4801]: I1206 03:32:05.022746 4801 generic.go:334] "Generic (PLEG): container finished" podID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerID="8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38" exitCode=0 Dec 06 03:32:05 crc kubenswrapper[4801]: I1206 03:32:05.022824 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerDied","Data":"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38"} Dec 06 03:32:06 crc kubenswrapper[4801]: I1206 03:32:06.044932 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerStarted","Data":"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c"} Dec 06 03:32:06 crc kubenswrapper[4801]: I1206 03:32:06.081648 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vcf5" podStartSLOduration=3.584665867 podStartE2EDuration="6.081628153s" podCreationTimestamp="2025-12-06 03:32:00 +0000 UTC" firstStartedPulling="2025-12-06 03:32:03.001027685 +0000 UTC m=+1576.123635257" lastFinishedPulling="2025-12-06 03:32:05.497989971 +0000 UTC m=+1578.620597543" observedRunningTime="2025-12-06 03:32:06.076284017 +0000 UTC m=+1579.198891589" watchObservedRunningTime="2025-12-06 03:32:06.081628153 +0000 UTC m=+1579.204235725" Dec 06 03:32:07 crc kubenswrapper[4801]: I1206 03:32:07.980775 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" 
containerID="cri-o://a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd" gracePeriod=604796 Dec 06 03:32:10 crc kubenswrapper[4801]: I1206 03:32:10.413843 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:10 crc kubenswrapper[4801]: I1206 03:32:10.414280 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:10 crc kubenswrapper[4801]: I1206 03:32:10.477902 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:10 crc kubenswrapper[4801]: I1206 03:32:10.827411 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq" containerID="cri-o://d961461802ea59de4613b751544f36808463a49652856db4d6b72d50398ef750" gracePeriod=604795 Dec 06 03:32:11 crc kubenswrapper[4801]: I1206 03:32:11.127195 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:11 crc kubenswrapper[4801]: I1206 03:32:11.167600 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:11 crc kubenswrapper[4801]: I1206 03:32:11.169389 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:32:11 crc kubenswrapper[4801]: I1206 03:32:11.169437 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:32:12 crc kubenswrapper[4801]: I1206 03:32:12.695430 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Dec 06 03:32:13 crc kubenswrapper[4801]: I1206 03:32:13.062162 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 06 03:32:13 crc kubenswrapper[4801]: I1206 03:32:13.107816 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4vcf5" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="registry-server" containerID="cri-o://26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c" gracePeriod=2 Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.699677 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.813324 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities\") pod \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.813815 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content\") pod \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.813873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmb6b\" (UniqueName: \"kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b\") pod \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\" (UID: \"782fa367-ebf7-4e7a-b6d6-2a705ae17358\") " Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.814964 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities" (OuterVolumeSpecName: "utilities") pod "782fa367-ebf7-4e7a-b6d6-2a705ae17358" (UID: "782fa367-ebf7-4e7a-b6d6-2a705ae17358"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.819593 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b" (OuterVolumeSpecName: "kube-api-access-xmb6b") pod "782fa367-ebf7-4e7a-b6d6-2a705ae17358" (UID: "782fa367-ebf7-4e7a-b6d6-2a705ae17358"). InnerVolumeSpecName "kube-api-access-xmb6b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.915942 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:14 crc kubenswrapper[4801]: I1206 03:32:14.915971 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmb6b\" (UniqueName: \"kubernetes.io/projected/782fa367-ebf7-4e7a-b6d6-2a705ae17358-kube-api-access-xmb6b\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.001256 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "782fa367-ebf7-4e7a-b6d6-2a705ae17358" (UID: "782fa367-ebf7-4e7a-b6d6-2a705ae17358"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.017998 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/782fa367-ebf7-4e7a-b6d6-2a705ae17358-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.126790 4801 generic.go:334] "Generic (PLEG): container finished" podID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerID="26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c" exitCode=0 Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.126854 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerDied","Data":"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c"} Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.126893 4801 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vcf5" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.126935 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vcf5" event={"ID":"782fa367-ebf7-4e7a-b6d6-2a705ae17358","Type":"ContainerDied","Data":"326d596f0e86d47d2549348c977accb84965761c61d61d94234f1880c69e0c9d"} Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.126962 4801 scope.go:117] "RemoveContainer" containerID="26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.146789 4801 scope.go:117] "RemoveContainer" containerID="8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.174963 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.182519 4801 scope.go:117] "RemoveContainer" containerID="5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.185475 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vcf5"] Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.211814 4801 scope.go:117] "RemoveContainer" containerID="26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c" Dec 06 03:32:15 crc kubenswrapper[4801]: E1206 03:32:15.212406 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c\": container with ID starting with 26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c not found: ID does not exist" containerID="26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.212446 4801 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c"} err="failed to get container status \"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c\": rpc error: code = NotFound desc = could not find container \"26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c\": container with ID starting with 26ccb0d0a92e850af97c92faba6be3a48e92dec27e9c128852aa875d4493859c not found: ID does not exist" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.212471 4801 scope.go:117] "RemoveContainer" containerID="8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38" Dec 06 03:32:15 crc kubenswrapper[4801]: E1206 03:32:15.212846 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38\": container with ID starting with 8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38 not found: ID does not exist" containerID="8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.212887 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38"} err="failed to get container status \"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38\": rpc error: code = NotFound desc = could not find container \"8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38\": container with ID starting with 8ab17597c401efc143a94893fb6b537d407473545ba2b535566754ce0eb74a38 not found: ID does not exist" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.212914 4801 scope.go:117] "RemoveContainer" containerID="5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1" Dec 06 03:32:15 crc kubenswrapper[4801]: E1206 
03:32:15.213261 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1\": container with ID starting with 5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1 not found: ID does not exist" containerID="5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.213309 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1"} err="failed to get container status \"5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1\": rpc error: code = NotFound desc = could not find container \"5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1\": container with ID starting with 5fb582d4a42dab2639741094f6d004949e01d0b3814efb5e38d1ec5a1796aad1 not found: ID does not exist" Dec 06 03:32:15 crc kubenswrapper[4801]: I1206 03:32:15.225547 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" path="/var/lib/kubelet/pods/782fa367-ebf7-4e7a-b6d6-2a705ae17358/volumes" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.037079 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139647 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139722 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139792 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86scx\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139828 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139911 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.139996 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140019 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140045 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140123 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140164 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140199 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\" (UID: \"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\") " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 
03:32:16.140854 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.140997 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.141151 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.147928 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.148731 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info" (OuterVolumeSpecName: "pod-info") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.148728 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.154392 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx" (OuterVolumeSpecName: "kube-api-access-86scx") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "kube-api-access-86scx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.155107 4801 generic.go:334] "Generic (PLEG): container finished" podID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerID="a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd" exitCode=0 Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.155252 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerDied","Data":"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd"} Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.155287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9e01c6fa-4dee-4835-a73d-30cd5af1a83f","Type":"ContainerDied","Data":"4b1de799b98c21575b5863bdc94b152ffa87cb91e523369b79225eecf3db34f8"} Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.155308 4801 scope.go:117] "RemoveContainer" 
containerID="a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.155439 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.157232 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.160370 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.195451 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data" (OuterVolumeSpecName: "config-data") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.226362 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf" (OuterVolumeSpecName: "server-conf") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245662 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245710 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245720 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245729 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245739 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86scx\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-kube-api-access-86scx\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245753 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245774 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245782 4801 
reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.245792 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.284112 4801 scope.go:117] "RemoveContainer" containerID="fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.286350 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.306746 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" (UID: "9e01c6fa-4dee-4835-a73d-30cd5af1a83f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.309902 4801 scope.go:117] "RemoveContainer" containerID="a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.310329 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd\": container with ID starting with a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd not found: ID does not exist" containerID="a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.310371 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd"} err="failed to get container status \"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd\": rpc error: code = NotFound desc = could not find container \"a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd\": container with ID starting with a766efe48f041306f8072a16f52dbbd0290f633a2a059b41510d9a8d754825dd not found: ID does not exist" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.310399 4801 scope.go:117] "RemoveContainer" containerID="fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.310821 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795\": container with ID starting with fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795 not found: ID does not exist" containerID="fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.310850 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795"} err="failed to get container status \"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795\": rpc error: code = NotFound desc = could not find container \"fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795\": container with ID starting with fa7188fdec4dfc55a7023ee24b2d4e59434dd7c770217bdd300a0903e29fa795 not found: ID does not exist" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.348493 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9e01c6fa-4dee-4835-a73d-30cd5af1a83f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.348840 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.489798 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.498579 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.522317 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.523571 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="extract-content" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523592 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="extract-content" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.523643 4801 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523653 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.523667 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="extract-utilities" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523672 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="extract-utilities" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.523685 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="registry-server" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523690 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="registry-server" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.523706 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="setup-container" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523711 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="setup-container" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523892 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" containerName="rabbitmq" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.523904 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="782fa367-ebf7-4e7a-b6d6-2a705ae17358" containerName="registry-server" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.527838 4801 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.530279 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.530577 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.530960 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.531095 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.531220 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.531778 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-92ctn" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.532350 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.535965 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654454 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654512 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654580 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654601 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96374aa1-9e52-440e-b058-26ed49f7b0e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654621 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654638 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654654 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwx96\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-kube-api-access-rwx96\") pod 
\"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654679 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654692 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654728 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.654790 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96374aa1-9e52-440e-b058-26ed49f7b0e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756200 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " 
pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756328 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96374aa1-9e52-440e-b058-26ed49f7b0e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756369 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756394 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756419 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwx96\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-kube-api-access-rwx96\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756459 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756483 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756537 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96374aa1-9e52-440e-b058-26ed49f7b0e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756628 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.756652 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.758086 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.758089 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.758135 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.758650 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.758729 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-config-data\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.759378 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/96374aa1-9e52-440e-b058-26ed49f7b0e9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " 
pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.764931 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.765042 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/96374aa1-9e52-440e-b058-26ed49f7b0e9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.765275 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/96374aa1-9e52-440e-b058-26ed49f7b0e9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.787398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.792588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwx96\" (UniqueName: \"kubernetes.io/projected/96374aa1-9e52-440e-b058-26ed49f7b0e9-kube-api-access-rwx96\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.823621 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"96374aa1-9e52-440e-b058-26ed49f7b0e9\") " pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: I1206 03:32:16.866022 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.950489 4801 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 06 03:32:16 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc" Netns:"/var/run/netns/3a7c0c42-5771-444b-8a5b-4f29300d10fb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API Dec 06 03:32:16 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Dec 06 03:32:16 crc kubenswrapper[4801]: >
Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.950921 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 06 03:32:16 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc" Netns:"/var/run/netns/3a7c0c42-5771-444b-8a5b-4f29300d10fb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API
Dec 06 03:32:16 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 06 03:32:16 crc kubenswrapper[4801]: > pod="openstack/rabbitmq-server-0"
Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.950941 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 06 03:32:16 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc" Netns:"/var/run/netns/3a7c0c42-5771-444b-8a5b-4f29300d10fb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API
Dec 06 03:32:16 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 06 03:32:16 crc kubenswrapper[4801]: > pod="openstack/rabbitmq-server-0"
Dec 06 03:32:16 crc kubenswrapper[4801]: E1206 03:32:16.951093 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"rabbitmq-server-0_openstack(96374aa1-9e52-440e-b058-26ed49f7b0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"rabbitmq-server-0_openstack(96374aa1-9e52-440e-b058-26ed49f7b0e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc): error adding pod openstack_rabbitmq-server-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc\\\" Netns:\\\"/var/run/netns/3a7c0c42-5771-444b-8a5b-4f29300d10fb\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=91f65d454a88e4bd6d70defa9780311e71d7961bdbfc3a0b93cbf9b24b9055dc;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID \\\"96374aa1-9e52-440e-b058-26ed49f7b0e9\\\" but got \\\"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\\\" from Kube API\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/rabbitmq-server-0" podUID="96374aa1-9e52-440e-b058-26ed49f7b0e9"
Dec 06 03:32:17 crc kubenswrapper[4801]: I1206 03:32:17.174566 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 03:32:17 crc kubenswrapper[4801]: I1206 03:32:17.175857 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 03:32:17 crc kubenswrapper[4801]: I1206 03:32:17.225130 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e01c6fa-4dee-4835-a73d-30cd5af1a83f" path="/var/lib/kubelet/pods/9e01c6fa-4dee-4835-a73d-30cd5af1a83f/volumes"
Dec 06 03:32:17 crc kubenswrapper[4801]: E1206 03:32:17.269090 4801 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 06 03:32:17 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5" Netns:"/var/run/netns/846b7258-8a68-497a-927d-845def8d0837" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API
Dec 06 03:32:17 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 06 03:32:17 crc kubenswrapper[4801]: >
Dec 06 03:32:17 crc kubenswrapper[4801]: E1206 03:32:17.269163 4801 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Dec 06 03:32:17 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5" Netns:"/var/run/netns/846b7258-8a68-497a-927d-845def8d0837" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API
Dec 06 03:32:17 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 06 03:32:17 crc kubenswrapper[4801]: > pod="openstack/rabbitmq-server-0"
Dec 06 03:32:17 crc kubenswrapper[4801]: E1206 03:32:17.269189 4801 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Dec 06 03:32:17 crc kubenswrapper[4801]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5): error adding pod openstack_rabbitmq-server-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5" Netns:"/var/run/netns/846b7258-8a68-497a-927d-845def8d0837" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9" Path:"" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID "96374aa1-9e52-440e-b058-26ed49f7b0e9" but got "9e01c6fa-4dee-4835-a73d-30cd5af1a83f" from Kube API
Dec 06 03:32:17 crc kubenswrapper[4801]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 06 03:32:17 crc kubenswrapper[4801]: > pod="openstack/rabbitmq-server-0"
Dec 06 03:32:17 crc kubenswrapper[4801]: E1206 03:32:17.269261 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"rabbitmq-server-0_openstack(96374aa1-9e52-440e-b058-26ed49f7b0e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"rabbitmq-server-0_openstack(96374aa1-9e52-440e-b058-26ed49f7b0e9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_rabbitmq-server-0_openstack_96374aa1-9e52-440e-b058-26ed49f7b0e9_0(d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5): error adding pod openstack_rabbitmq-server-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5\\\" Netns:\\\"/var/run/netns/846b7258-8a68-497a-927d-845def8d0837\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=rabbitmq-server-0;K8S_POD_INFRA_CONTAINER_ID=d8539540a4a49aae8d6a7a6f7012b575f2ff0b87b877bdb209cb2df682d993c5;K8S_POD_UID=96374aa1-9e52-440e-b058-26ed49f7b0e9\\\" Path:\\\"\\\" ERRORED: error configuring pod [openstack/rabbitmq-server-0] networking: Multus: [openstack/rabbitmq-server-0/96374aa1-9e52-440e-b058-26ed49f7b0e9]: expected pod UID \\\"96374aa1-9e52-440e-b058-26ed49f7b0e9\\\" but got \\\"9e01c6fa-4dee-4835-a73d-30cd5af1a83f\\\" from Kube API\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openstack/rabbitmq-server-0" podUID="96374aa1-9e52-440e-b058-26ed49f7b0e9"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.218148 4801 generic.go:334] "Generic (PLEG): container finished" podID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerID="d961461802ea59de4613b751544f36808463a49652856db4d6b72d50398ef750" exitCode=0
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.224329 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerDied","Data":"d961461802ea59de4613b751544f36808463a49652856db4d6b72d50398ef750"}
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.634048 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.732908 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733369 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733450 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733532 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8p9\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733600 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733630 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733656 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733678 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.733700 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.734015 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.734051 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret\") pod \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\" (UID: \"b8d84a21-b2e6-4d69-9f2b-48870e2d1702\") "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.734353 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.734540 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.735107 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.735135 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.735147 4801 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-plugins-conf\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.740374 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info" (OuterVolumeSpecName: "pod-info") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.740388 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.742401 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9" (OuterVolumeSpecName: "kube-api-access-gw8p9") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "kube-api-access-gw8p9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.747117 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.751898 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.774423 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data" (OuterVolumeSpecName: "config-data") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.809403 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf" (OuterVolumeSpecName: "server-conf") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836545 4801 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-server-conf\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836715 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836803 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836862 4801 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-pod-info\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836916 4801 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.836983 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.837059 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw8p9\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-kube-api-access-gw8p9\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.859069 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.878155 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b8d84a21-b2e6-4d69-9f2b-48870e2d1702" (UID: "b8d84a21-b2e6-4d69-9f2b-48870e2d1702"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.940357 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.940419 4801 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8d84a21-b2e6-4d69-9f2b-48870e2d1702-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.947654 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qf5bt"]
Dec 06 03:32:21 crc kubenswrapper[4801]: E1206 03:32:21.954261 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.954299 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq"
Dec 06 03:32:21 crc kubenswrapper[4801]: E1206 03:32:21.954335 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="setup-container"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.954343 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="setup-container"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.954516 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" containerName="rabbitmq"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.956043 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:21 crc kubenswrapper[4801]: I1206 03:32:21.960876 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5bt"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.142905 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.143025 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.143172 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksl4q\" (UniqueName: \"kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.228584 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b8d84a21-b2e6-4d69-9f2b-48870e2d1702","Type":"ContainerDied","Data":"49cc6d4a88a290bb2b65ed5867cd73ab62463f35ff3fcc4785be9a4d0e0be608"}
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.228651 4801 scope.go:117] "RemoveContainer" containerID="d961461802ea59de4613b751544f36808463a49652856db4d6b72d50398ef750"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.228661 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.245150 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.245218 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.245319 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksl4q\" (UniqueName: \"kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.245907 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.245980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.255259 4801 scope.go:117] "RemoveContainer" containerID="103e39ea3c7868e8eeec409ec28e22cd8888fb8366b8635d350b4e3f90de6af7"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.271954 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.272793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksl4q\" (UniqueName: \"kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q\") pod \"community-operators-qf5bt\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.286249 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.309491 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5bt"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.311088 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.312516 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.315499 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.315738 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-72jlq"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.315959 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.316829 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.316950 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.317594 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.317650 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.336703 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.375868 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.377328 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.381804 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.416168 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"]
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454005 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454314 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/509f393d-bb6a-47e3-a68c-e598c5b37a1b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454364 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454386 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454422 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454444 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454472 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454500 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/509f393d-bb6a-47e3-a68c-e598c5b37a1b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454540 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.454563 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs76\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-kube-api-access-jhs76\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556746 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556812 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbv6\" (UniqueName: \"kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p"
Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556851 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc
kubenswrapper[4801]: I1206 03:32:22.556869 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/509f393d-bb6a-47e3-a68c-e598c5b37a1b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556898 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556955 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhs76\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-kube-api-access-jhs76\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.556986 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: 
I1206 03:32:22.557005 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557021 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557041 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557068 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557088 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/509f393d-bb6a-47e3-a68c-e598c5b37a1b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557115 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557138 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557171 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557201 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557697 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.557966 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.558283 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.576339 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.579361 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/509f393d-bb6a-47e3-a68c-e598c5b37a1b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.580153 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/509f393d-bb6a-47e3-a68c-e598c5b37a1b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.582497 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.582972 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/509f393d-bb6a-47e3-a68c-e598c5b37a1b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.586530 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.597421 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.611120 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhs76\" (UniqueName: \"kubernetes.io/projected/509f393d-bb6a-47e3-a68c-e598c5b37a1b-kube-api-access-jhs76\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.669913 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fbv6\" (UniqueName: \"kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 
crc kubenswrapper[4801]: I1206 03:32:22.669974 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.669997 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.670045 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.670064 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.670083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.671879 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.671982 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.673281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.674957 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.678291 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.693229 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"509f393d-bb6a-47e3-a68c-e598c5b37a1b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.698244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fbv6\" (UniqueName: \"kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6\") pod \"dnsmasq-dns-6447ccbd8f-wfk4p\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.736306 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.937961 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:32:22 crc kubenswrapper[4801]: I1206 03:32:22.955467 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qf5bt"] Dec 06 03:32:23 crc kubenswrapper[4801]: I1206 03:32:23.195801 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"] Dec 06 03:32:23 crc kubenswrapper[4801]: W1206 03:32:23.199333 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5185e674_2e3e_46f4_b5d9_cb61819cd7cd.slice/crio-d5b7e1f2ca83758e043a5a144892dc9ea11e0d7ab9996848fb4319e31d889ecd WatchSource:0}: Error finding container d5b7e1f2ca83758e043a5a144892dc9ea11e0d7ab9996848fb4319e31d889ecd: Status 404 returned error can't find the container with id d5b7e1f2ca83758e043a5a144892dc9ea11e0d7ab9996848fb4319e31d889ecd Dec 06 03:32:23 crc kubenswrapper[4801]: I1206 03:32:23.234595 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8d84a21-b2e6-4d69-9f2b-48870e2d1702" 
path="/var/lib/kubelet/pods/b8d84a21-b2e6-4d69-9f2b-48870e2d1702/volumes" Dec 06 03:32:23 crc kubenswrapper[4801]: I1206 03:32:23.247570 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" event={"ID":"5185e674-2e3e-46f4-b5d9-cb61819cd7cd","Type":"ContainerStarted","Data":"d5b7e1f2ca83758e043a5a144892dc9ea11e0d7ab9996848fb4319e31d889ecd"} Dec 06 03:32:23 crc kubenswrapper[4801]: I1206 03:32:23.248465 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerStarted","Data":"3fd15937fa6bc04fbaec37ef4910e444851f6b3abe635136f60c63e4f6ffa413"} Dec 06 03:32:23 crc kubenswrapper[4801]: I1206 03:32:23.553273 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 03:32:24 crc kubenswrapper[4801]: I1206 03:32:24.259210 4801 generic.go:334] "Generic (PLEG): container finished" podID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerID="768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b" exitCode=0 Dec 06 03:32:24 crc kubenswrapper[4801]: I1206 03:32:24.259260 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" event={"ID":"5185e674-2e3e-46f4-b5d9-cb61819cd7cd","Type":"ContainerDied","Data":"768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b"} Dec 06 03:32:24 crc kubenswrapper[4801]: I1206 03:32:24.263414 4801 generic.go:334] "Generic (PLEG): container finished" podID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerID="4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e" exitCode=0 Dec 06 03:32:24 crc kubenswrapper[4801]: I1206 03:32:24.263559 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" 
event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerDied","Data":"4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e"} Dec 06 03:32:24 crc kubenswrapper[4801]: I1206 03:32:24.266794 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"509f393d-bb6a-47e3-a68c-e598c5b37a1b","Type":"ContainerStarted","Data":"f5513bbac3427060ec2e6828ed7ab6958fcbad7bef90bea04ab69089ebb5a5e8"} Dec 06 03:32:25 crc kubenswrapper[4801]: I1206 03:32:25.278357 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"509f393d-bb6a-47e3-a68c-e598c5b37a1b","Type":"ContainerStarted","Data":"42189448601c6d27e3c37694d88d0644f760f18b3e5e07311e8cec90dab53a18"} Dec 06 03:32:25 crc kubenswrapper[4801]: I1206 03:32:25.282920 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" event={"ID":"5185e674-2e3e-46f4-b5d9-cb61819cd7cd","Type":"ContainerStarted","Data":"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da"} Dec 06 03:32:25 crc kubenswrapper[4801]: I1206 03:32:25.283070 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:25 crc kubenswrapper[4801]: I1206 03:32:25.328246 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" podStartSLOduration=3.328229438 podStartE2EDuration="3.328229438s" podCreationTimestamp="2025-12-06 03:32:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:32:25.32138111 +0000 UTC m=+1598.443988702" watchObservedRunningTime="2025-12-06 03:32:25.328229438 +0000 UTC m=+1598.450837010" Dec 06 03:32:26 crc kubenswrapper[4801]: I1206 03:32:26.292596 4801 generic.go:334] "Generic (PLEG): container finished" podID="fab5f952-bc28-4e22-9376-1123b59a34ae" 
containerID="32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db" exitCode=0 Dec 06 03:32:26 crc kubenswrapper[4801]: I1206 03:32:26.292707 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerDied","Data":"32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db"} Dec 06 03:32:27 crc kubenswrapper[4801]: I1206 03:32:27.304460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerStarted","Data":"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46"} Dec 06 03:32:27 crc kubenswrapper[4801]: I1206 03:32:27.333480 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qf5bt" podStartSLOduration=3.82511564 podStartE2EDuration="6.333460358s" podCreationTimestamp="2025-12-06 03:32:21 +0000 UTC" firstStartedPulling="2025-12-06 03:32:24.264964662 +0000 UTC m=+1597.387572234" lastFinishedPulling="2025-12-06 03:32:26.77330938 +0000 UTC m=+1599.895916952" observedRunningTime="2025-12-06 03:32:27.33094874 +0000 UTC m=+1600.453556312" watchObservedRunningTime="2025-12-06 03:32:27.333460358 +0000 UTC m=+1600.456067930" Dec 06 03:32:29 crc kubenswrapper[4801]: I1206 03:32:29.212147 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:29 crc kubenswrapper[4801]: I1206 03:32:29.213049 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 03:32:29 crc kubenswrapper[4801]: I1206 03:32:29.640905 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 03:32:29 crc kubenswrapper[4801]: W1206 03:32:29.643790 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96374aa1_9e52_440e_b058_26ed49f7b0e9.slice/crio-a12889a4c47e25ee3f33f96fcc61cee01f72d31d07b755b37d06d2bad7c580cc WatchSource:0}: Error finding container a12889a4c47e25ee3f33f96fcc61cee01f72d31d07b755b37d06d2bad7c580cc: Status 404 returned error can't find the container with id a12889a4c47e25ee3f33f96fcc61cee01f72d31d07b755b37d06d2bad7c580cc Dec 06 03:32:30 crc kubenswrapper[4801]: I1206 03:32:30.332982 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96374aa1-9e52-440e-b058-26ed49f7b0e9","Type":"ContainerStarted","Data":"a12889a4c47e25ee3f33f96fcc61cee01f72d31d07b755b37d06d2bad7c580cc"} Dec 06 03:32:31 crc kubenswrapper[4801]: I1206 03:32:31.343483 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96374aa1-9e52-440e-b058-26ed49f7b0e9","Type":"ContainerStarted","Data":"9e37841f5b1a213993cc1cad518567d70f3b693e7edd8610347091c5309c148a"} Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.310119 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.310367 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.353060 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.402461 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.590309 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5bt"] Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.739028 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.803491 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.804131 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="dnsmasq-dns" containerID="cri-o://c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e" gracePeriod=10 Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.985821 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 03:32:32 crc kubenswrapper[4801]: I1206 03:32:32.988177 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.000561 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068197 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068238 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068288 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmrzv\" (UniqueName: \"kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068329 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068360 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.068377 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.169803 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.169860 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.169917 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmrzv\" (UniqueName: \"kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.169977 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.170018 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.170038 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.171078 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.171104 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.172656 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.172861 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.173406 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.207225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmrzv\" (UniqueName: \"kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv\") pod \"dnsmasq-dns-864d5fc68c-zg8p2\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.338804 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.359782 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.366912 4801 generic.go:334] "Generic (PLEG): container finished" podID="5ba81752-9263-4679-9908-c8f6eecd163d" containerID="c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e" exitCode=0 Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.367155 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerDied","Data":"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e"} Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.367241 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" event={"ID":"5ba81752-9263-4679-9908-c8f6eecd163d","Type":"ContainerDied","Data":"aef700c21f2e56db8f48c5ce0f572494fe92b9b5c0f40f585cab1a5d671c6ac2"} Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.367317 4801 scope.go:117] "RemoveContainer" containerID="c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.367535 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-57pwn" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.373548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-869ms\" (UniqueName: \"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms\") pod \"5ba81752-9263-4679-9908-c8f6eecd163d\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.373676 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc\") pod \"5ba81752-9263-4679-9908-c8f6eecd163d\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.373702 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb\") pod \"5ba81752-9263-4679-9908-c8f6eecd163d\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.373851 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb\") pod \"5ba81752-9263-4679-9908-c8f6eecd163d\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.373909 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config\") pod \"5ba81752-9263-4679-9908-c8f6eecd163d\" (UID: \"5ba81752-9263-4679-9908-c8f6eecd163d\") " Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.399134 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms" (OuterVolumeSpecName: "kube-api-access-869ms") pod "5ba81752-9263-4679-9908-c8f6eecd163d" (UID: "5ba81752-9263-4679-9908-c8f6eecd163d"). InnerVolumeSpecName "kube-api-access-869ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.418497 4801 scope.go:117] "RemoveContainer" containerID="171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.438030 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5ba81752-9263-4679-9908-c8f6eecd163d" (UID: "5ba81752-9263-4679-9908-c8f6eecd163d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.446333 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5ba81752-9263-4679-9908-c8f6eecd163d" (UID: "5ba81752-9263-4679-9908-c8f6eecd163d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.449411 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config" (OuterVolumeSpecName: "config") pod "5ba81752-9263-4679-9908-c8f6eecd163d" (UID: "5ba81752-9263-4679-9908-c8f6eecd163d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.459076 4801 scope.go:117] "RemoveContainer" containerID="c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e" Dec 06 03:32:33 crc kubenswrapper[4801]: E1206 03:32:33.459495 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e\": container with ID starting with c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e not found: ID does not exist" containerID="c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.459534 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e"} err="failed to get container status \"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e\": rpc error: code = NotFound desc = could not find container \"c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e\": container with ID starting with c42e0823024256c56d7c11887cf0f96d0b36150f2ffc370406cd0f5c20584e2e not found: ID does not exist" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.459560 4801 scope.go:117] "RemoveContainer" containerID="171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db" Dec 06 03:32:33 crc kubenswrapper[4801]: E1206 03:32:33.459860 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db\": container with ID starting with 171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db not found: ID does not exist" containerID="171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.459906 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db"} err="failed to get container status \"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db\": rpc error: code = NotFound desc = could not find container \"171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db\": container with ID starting with 171c60085568e5b984234b84c0a731f798daad1a73bcc4c1fc59e3448c2ae5db not found: ID does not exist" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.472243 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5ba81752-9263-4679-9908-c8f6eecd163d" (UID: "5ba81752-9263-4679-9908-c8f6eecd163d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.476171 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.476492 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.476506 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.476518 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ba81752-9263-4679-9908-c8f6eecd163d-config\") on node \"crc\" 
DevicePath \"\"" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.476532 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-869ms\" (UniqueName: \"kubernetes.io/projected/5ba81752-9263-4679-9908-c8f6eecd163d-kube-api-access-869ms\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.720937 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.730946 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-57pwn"] Dec 06 03:32:33 crc kubenswrapper[4801]: I1206 03:32:33.832638 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 03:32:33 crc kubenswrapper[4801]: W1206 03:32:33.834002 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod080aa27a_3c47_4c6f_bced_06f2ebab0d84.slice/crio-fa1f7734b63f4311b9f1c3cf4686e3aa2e9f7d42ff9675d1a32c280d39d2498b WatchSource:0}: Error finding container fa1f7734b63f4311b9f1c3cf4686e3aa2e9f7d42ff9675d1a32c280d39d2498b: Status 404 returned error can't find the container with id fa1f7734b63f4311b9f1c3cf4686e3aa2e9f7d42ff9675d1a32c280d39d2498b Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.376747 4801 generic.go:334] "Generic (PLEG): container finished" podID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerID="a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5" exitCode=0 Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.376843 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" event={"ID":"080aa27a-3c47-4c6f-bced-06f2ebab0d84","Type":"ContainerDied","Data":"a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5"} Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.377182 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" event={"ID":"080aa27a-3c47-4c6f-bced-06f2ebab0d84","Type":"ContainerStarted","Data":"fa1f7734b63f4311b9f1c3cf4686e3aa2e9f7d42ff9675d1a32c280d39d2498b"} Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.378550 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qf5bt" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="registry-server" containerID="cri-o://ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46" gracePeriod=2 Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.784022 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.908158 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksl4q\" (UniqueName: \"kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q\") pod \"fab5f952-bc28-4e22-9376-1123b59a34ae\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.908292 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content\") pod \"fab5f952-bc28-4e22-9376-1123b59a34ae\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.908376 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities\") pod \"fab5f952-bc28-4e22-9376-1123b59a34ae\" (UID: \"fab5f952-bc28-4e22-9376-1123b59a34ae\") " Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.909526 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities" (OuterVolumeSpecName: "utilities") pod "fab5f952-bc28-4e22-9376-1123b59a34ae" (UID: "fab5f952-bc28-4e22-9376-1123b59a34ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.916994 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q" (OuterVolumeSpecName: "kube-api-access-ksl4q") pod "fab5f952-bc28-4e22-9376-1123b59a34ae" (UID: "fab5f952-bc28-4e22-9376-1123b59a34ae"). InnerVolumeSpecName "kube-api-access-ksl4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:34 crc kubenswrapper[4801]: I1206 03:32:34.981631 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fab5f952-bc28-4e22-9376-1123b59a34ae" (UID: "fab5f952-bc28-4e22-9376-1123b59a34ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.010567 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.010619 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fab5f952-bc28-4e22-9376-1123b59a34ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.010631 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksl4q\" (UniqueName: \"kubernetes.io/projected/fab5f952-bc28-4e22-9376-1123b59a34ae-kube-api-access-ksl4q\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.222236 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" path="/var/lib/kubelet/pods/5ba81752-9263-4679-9908-c8f6eecd163d/volumes" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.390907 4801 generic.go:334] "Generic (PLEG): container finished" podID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerID="ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46" exitCode=0 Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.390974 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qf5bt" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.390992 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerDied","Data":"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46"} Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.391022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qf5bt" event={"ID":"fab5f952-bc28-4e22-9376-1123b59a34ae","Type":"ContainerDied","Data":"3fd15937fa6bc04fbaec37ef4910e444851f6b3abe635136f60c63e4f6ffa413"} Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.391046 4801 scope.go:117] "RemoveContainer" containerID="ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.396233 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" event={"ID":"080aa27a-3c47-4c6f-bced-06f2ebab0d84","Type":"ContainerStarted","Data":"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879"} Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.396363 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.420085 4801 scope.go:117] "RemoveContainer" containerID="32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.423290 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qf5bt"] Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.435445 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qf5bt"] Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.447803 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" podStartSLOduration=3.4477447469999998 podStartE2EDuration="3.447744747s" podCreationTimestamp="2025-12-06 03:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:32:35.440049436 +0000 UTC m=+1608.562657008" watchObservedRunningTime="2025-12-06 03:32:35.447744747 +0000 UTC m=+1608.570352319" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.449261 4801 scope.go:117] "RemoveContainer" containerID="4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.497333 4801 scope.go:117] "RemoveContainer" containerID="ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46" Dec 06 03:32:35 crc kubenswrapper[4801]: E1206 03:32:35.497935 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46\": container with ID starting with ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46 not found: ID does not exist" containerID="ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.498003 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46"} err="failed to get container status \"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46\": rpc error: code = NotFound desc = could not find container \"ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46\": container with ID starting with ea517f0a40d136c428c423c356f228d9035358b1c8b3d4139d8d6731d680da46 not found: ID does not exist" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.498026 4801 
scope.go:117] "RemoveContainer" containerID="32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db" Dec 06 03:32:35 crc kubenswrapper[4801]: E1206 03:32:35.498457 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db\": container with ID starting with 32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db not found: ID does not exist" containerID="32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.498479 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db"} err="failed to get container status \"32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db\": rpc error: code = NotFound desc = could not find container \"32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db\": container with ID starting with 32d236172b9f5ae226c728437af950b057d9c1da1ce7f873614a51ad6df7d5db not found: ID does not exist" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.498493 4801 scope.go:117] "RemoveContainer" containerID="4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e" Dec 06 03:32:35 crc kubenswrapper[4801]: E1206 03:32:35.498925 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e\": container with ID starting with 4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e not found: ID does not exist" containerID="4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e" Dec 06 03:32:35 crc kubenswrapper[4801]: I1206 03:32:35.498950 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e"} err="failed to get container status \"4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e\": rpc error: code = NotFound desc = could not find container \"4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e\": container with ID starting with 4b2b311fc60eafd1e37ede3c881f1b13c1152e9813d24e6a0f919dcfe8a8526e not found: ID does not exist" Dec 06 03:32:37 crc kubenswrapper[4801]: I1206 03:32:37.226591 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" path="/var/lib/kubelet/pods/fab5f952-bc28-4e22-9376-1123b59a34ae/volumes" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.169856 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.169975 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.170051 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.171271 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28"} 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.171376 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" gracePeriod=600 Dec 06 03:32:41 crc kubenswrapper[4801]: E1206 03:32:41.296463 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.458828 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" exitCode=0 Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.458914 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28"} Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.459104 4801 scope.go:117] "RemoveContainer" containerID="fa4e1856c226fd52059b0fd49c8e200b1d6679f042be9b39be0d4c3a479e34b9" Dec 06 03:32:41 crc kubenswrapper[4801]: I1206 03:32:41.459648 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 
06 03:32:41 crc kubenswrapper[4801]: E1206 03:32:41.459953 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:32:43 crc kubenswrapper[4801]: I1206 03:32:43.363110 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 03:32:43 crc kubenswrapper[4801]: I1206 03:32:43.482057 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"] Dec 06 03:32:43 crc kubenswrapper[4801]: I1206 03:32:43.482347 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="dnsmasq-dns" containerID="cri-o://50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da" gracePeriod=10 Dec 06 03:32:43 crc kubenswrapper[4801]: I1206 03:32:43.936544 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.009726 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.010341 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.010390 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.010621 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.010666 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fbv6\" (UniqueName: \"kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.010702 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config\") pod \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\" (UID: \"5185e674-2e3e-46f4-b5d9-cb61819cd7cd\") " Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.026889 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6" (OuterVolumeSpecName: "kube-api-access-2fbv6") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "kube-api-access-2fbv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.065330 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.088271 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.087664 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.093869 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config" (OuterVolumeSpecName: "config") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.100168 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5185e674-2e3e-46f4-b5d9-cb61819cd7cd" (UID: "5185e674-2e3e-46f4-b5d9-cb61819cd7cd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.113656 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.113682 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fbv6\" (UniqueName: \"kubernetes.io/projected/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-kube-api-access-2fbv6\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.113693 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-config\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.113703 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc 
kubenswrapper[4801]: I1206 03:32:44.113711 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.113719 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5185e674-2e3e-46f4-b5d9-cb61819cd7cd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.507942 4801 generic.go:334] "Generic (PLEG): container finished" podID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerID="50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da" exitCode=0 Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.507992 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" event={"ID":"5185e674-2e3e-46f4-b5d9-cb61819cd7cd","Type":"ContainerDied","Data":"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da"} Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.508022 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" event={"ID":"5185e674-2e3e-46f4-b5d9-cb61819cd7cd","Type":"ContainerDied","Data":"d5b7e1f2ca83758e043a5a144892dc9ea11e0d7ab9996848fb4319e31d889ecd"} Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.508039 4801 scope.go:117] "RemoveContainer" containerID="50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.508150 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-wfk4p" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.528779 4801 scope.go:117] "RemoveContainer" containerID="768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.542523 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"] Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.549033 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-wfk4p"] Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.568932 4801 scope.go:117] "RemoveContainer" containerID="50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da" Dec 06 03:32:44 crc kubenswrapper[4801]: E1206 03:32:44.569300 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da\": container with ID starting with 50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da not found: ID does not exist" containerID="50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.569342 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da"} err="failed to get container status \"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da\": rpc error: code = NotFound desc = could not find container \"50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da\": container with ID starting with 50b88f6cdf6756b8906035e70b8a446788bdfb2e0641801eac21525f553156da not found: ID does not exist" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.569375 4801 scope.go:117] "RemoveContainer" containerID="768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b" Dec 06 
03:32:44 crc kubenswrapper[4801]: E1206 03:32:44.569689 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b\": container with ID starting with 768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b not found: ID does not exist" containerID="768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b" Dec 06 03:32:44 crc kubenswrapper[4801]: I1206 03:32:44.569736 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b"} err="failed to get container status \"768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b\": rpc error: code = NotFound desc = could not find container \"768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b\": container with ID starting with 768cf091dcf75a7edc7158cd6a4c2830501a516f1d44bb47afa44d5aefef5b3b not found: ID does not exist" Dec 06 03:32:45 crc kubenswrapper[4801]: I1206 03:32:45.223266 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" path="/var/lib/kubelet/pods/5185e674-2e3e-46f4-b5d9-cb61819cd7cd/volumes" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.674643 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s"] Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.676903 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="extract-utilities" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.677193 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="extract-utilities" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.677281 4801 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.677366 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.677548 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="registry-server" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.677629 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="registry-server" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.677738 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="init" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.677837 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="init" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.677934 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="extract-content" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.678006 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="extract-content" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.678100 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.678186 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: E1206 03:32:53.678272 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="init" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.678872 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="init" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.679657 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab5f952-bc28-4e22-9376-1123b59a34ae" containerName="registry-server" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.679775 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ba81752-9263-4679-9908-c8f6eecd163d" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.679893 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5185e674-2e3e-46f4-b5d9-cb61819cd7cd" containerName="dnsmasq-dns" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.680791 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.683199 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.683401 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.683956 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.684136 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.690692 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s"] Dec 06 03:32:53 crc 
kubenswrapper[4801]: I1206 03:32:53.785978 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl658\" (UniqueName: \"kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.786352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.786395 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.786427 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.887637 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.887705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.887742 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.887901 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl658\" (UniqueName: \"kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.893121 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 
03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.893264 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.896281 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:53 crc kubenswrapper[4801]: I1206 03:32:53.903283 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl658\" (UniqueName: \"kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-w796s\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:54 crc kubenswrapper[4801]: I1206 03:32:54.006283 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:32:54 crc kubenswrapper[4801]: I1206 03:32:54.519083 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s"] Dec 06 03:32:54 crc kubenswrapper[4801]: W1206 03:32:54.522541 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd80971b5_fe84_4ad9_a5db_75a00e17f031.slice/crio-bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9 WatchSource:0}: Error finding container bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9: Status 404 returned error can't find the container with id bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9 Dec 06 03:32:54 crc kubenswrapper[4801]: I1206 03:32:54.606235 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" event={"ID":"d80971b5-fe84-4ad9-a5db-75a00e17f031","Type":"ContainerStarted","Data":"bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9"} Dec 06 03:32:57 crc kubenswrapper[4801]: I1206 03:32:57.219830 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:32:57 crc kubenswrapper[4801]: E1206 03:32:57.220583 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:32:57 crc kubenswrapper[4801]: I1206 03:32:57.645346 4801 generic.go:334] "Generic (PLEG): container finished" podID="509f393d-bb6a-47e3-a68c-e598c5b37a1b" 
containerID="42189448601c6d27e3c37694d88d0644f760f18b3e5e07311e8cec90dab53a18" exitCode=0 Dec 06 03:32:57 crc kubenswrapper[4801]: I1206 03:32:57.645406 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"509f393d-bb6a-47e3-a68c-e598c5b37a1b","Type":"ContainerDied","Data":"42189448601c6d27e3c37694d88d0644f760f18b3e5e07311e8cec90dab53a18"} Dec 06 03:33:04 crc kubenswrapper[4801]: I1206 03:33:04.710990 4801 generic.go:334] "Generic (PLEG): container finished" podID="96374aa1-9e52-440e-b058-26ed49f7b0e9" containerID="9e37841f5b1a213993cc1cad518567d70f3b693e7edd8610347091c5309c148a" exitCode=0 Dec 06 03:33:04 crc kubenswrapper[4801]: I1206 03:33:04.711087 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96374aa1-9e52-440e-b058-26ed49f7b0e9","Type":"ContainerDied","Data":"9e37841f5b1a213993cc1cad518567d70f3b693e7edd8610347091c5309c148a"} Dec 06 03:33:08 crc kubenswrapper[4801]: E1206 03:33:08.160930 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Dec 06 03:33:08 crc kubenswrapper[4801]: E1206 03:33:08.161713 4801 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 03:33:08 crc kubenswrapper[4801]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Dec 06 03:33:08 crc kubenswrapper[4801]: - hosts: all Dec 06 03:33:08 crc kubenswrapper[4801]: strategy: linear Dec 06 03:33:08 crc kubenswrapper[4801]: tasks: Dec 06 03:33:08 crc kubenswrapper[4801]: - name: Enable 
podified-repos Dec 06 03:33:08 crc kubenswrapper[4801]: become: true Dec 06 03:33:08 crc kubenswrapper[4801]: ansible.builtin.shell: | Dec 06 03:33:08 crc kubenswrapper[4801]: set -euxo pipefail Dec 06 03:33:08 crc kubenswrapper[4801]: pushd /var/tmp Dec 06 03:33:08 crc kubenswrapper[4801]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Dec 06 03:33:08 crc kubenswrapper[4801]: pushd repo-setup-main Dec 06 03:33:08 crc kubenswrapper[4801]: python3 -m venv ./venv Dec 06 03:33:08 crc kubenswrapper[4801]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Dec 06 03:33:08 crc kubenswrapper[4801]: ./venv/bin/repo-setup current-podified -b antelope Dec 06 03:33:08 crc kubenswrapper[4801]: popd Dec 06 03:33:08 crc kubenswrapper[4801]: rm -rf repo-setup-main Dec 06 03:33:08 crc kubenswrapper[4801]: Dec 06 03:33:08 crc kubenswrapper[4801]: Dec 06 03:33:08 crc kubenswrapper[4801]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Dec 06 03:33:08 crc kubenswrapper[4801]: edpm_override_hosts: openstack-edpm-ipam Dec 06 03:33:08 crc kubenswrapper[4801]: edpm_service_type: repo-setup Dec 06 03:33:08 crc kubenswrapper[4801]: Dec 06 03:33:08 crc kubenswrapper[4801]: Dec 06 03:33:08 crc kubenswrapper[4801]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fl658,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-w796s_openstack(d80971b5-fe84-4ad9-a5db-75a00e17f031): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Dec 06 03:33:08 crc kubenswrapper[4801]: > logger="UnhandledError" Dec 06 03:33:08 crc kubenswrapper[4801]: E1206 03:33:08.163528 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.756399 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"509f393d-bb6a-47e3-a68c-e598c5b37a1b","Type":"ContainerStarted","Data":"b594d539114cfe5477fcb3d9b51afab86e00e33900568245d16329bc503f3aba"} Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.757407 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.763902 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"96374aa1-9e52-440e-b058-26ed49f7b0e9","Type":"ContainerStarted","Data":"5abad02e367849220bb4fcdeef49af499e909b412bf6e8873abad85328d0b92c"} Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.764348 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 03:33:08 crc kubenswrapper[4801]: E1206 03:33:08.768293 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.788126 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.788109668 podStartE2EDuration="46.788109668s" podCreationTimestamp="2025-12-06 03:32:22 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:33:08.783448882 +0000 UTC m=+1641.906056464" watchObservedRunningTime="2025-12-06 03:33:08.788109668 +0000 UTC m=+1641.910717250" Dec 06 03:33:08 crc kubenswrapper[4801]: I1206 03:33:08.835446 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.835426639 podStartE2EDuration="52.835426639s" podCreationTimestamp="2025-12-06 03:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 03:33:08.814327312 +0000 UTC m=+1641.936934914" watchObservedRunningTime="2025-12-06 03:33:08.835426639 +0000 UTC m=+1641.958034221" Dec 06 03:33:10 crc kubenswrapper[4801]: I1206 03:33:10.212719 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:33:10 crc kubenswrapper[4801]: E1206 03:33:10.213291 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:33:18 crc kubenswrapper[4801]: I1206 03:33:18.425834 4801 scope.go:117] "RemoveContainer" containerID="3a1bdd757a2dda20fb9152229e4308f7c32211ff4abdcf77fe5f28784d7d80c5" Dec 06 03:33:18 crc kubenswrapper[4801]: I1206 03:33:18.458372 4801 scope.go:117] "RemoveContainer" containerID="668456f1164b915b3e42353b758c36cd1cb7387c65f5316882577c7fd8740195" Dec 06 03:33:21 crc kubenswrapper[4801]: I1206 03:33:21.878054 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" event={"ID":"d80971b5-fe84-4ad9-a5db-75a00e17f031","Type":"ContainerStarted","Data":"fa3f2cc10a3718a2070c60bb1f86031dab77f0087958bf7f8603ba75e0deabc4"} Dec 06 03:33:21 crc kubenswrapper[4801]: I1206 03:33:21.897844 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" podStartSLOduration=2.8037490480000002 podStartE2EDuration="28.897828438s" podCreationTimestamp="2025-12-06 03:32:53 +0000 UTC" firstStartedPulling="2025-12-06 03:32:54.525060272 +0000 UTC m=+1627.647667844" lastFinishedPulling="2025-12-06 03:33:20.619139662 +0000 UTC m=+1653.741747234" observedRunningTime="2025-12-06 03:33:21.895068014 +0000 UTC m=+1655.017675586" watchObservedRunningTime="2025-12-06 03:33:21.897828438 +0000 UTC m=+1655.020436010" Dec 06 03:33:22 crc kubenswrapper[4801]: I1206 03:33:22.942168 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 03:33:24 crc kubenswrapper[4801]: I1206 03:33:24.212180 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:33:24 crc kubenswrapper[4801]: E1206 03:33:24.212558 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:33:26 crc kubenswrapper[4801]: I1206 03:33:26.870035 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 03:33:32 crc kubenswrapper[4801]: I1206 03:33:32.011338 4801 generic.go:334] "Generic (PLEG): 
container finished" podID="d80971b5-fe84-4ad9-a5db-75a00e17f031" containerID="fa3f2cc10a3718a2070c60bb1f86031dab77f0087958bf7f8603ba75e0deabc4" exitCode=0 Dec 06 03:33:32 crc kubenswrapper[4801]: I1206 03:33:32.011485 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" event={"ID":"d80971b5-fe84-4ad9-a5db-75a00e17f031","Type":"ContainerDied","Data":"fa3f2cc10a3718a2070c60bb1f86031dab77f0087958bf7f8603ba75e0deabc4"} Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.422142 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.540614 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key\") pod \"d80971b5-fe84-4ad9-a5db-75a00e17f031\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.540734 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory\") pod \"d80971b5-fe84-4ad9-a5db-75a00e17f031\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.540843 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle\") pod \"d80971b5-fe84-4ad9-a5db-75a00e17f031\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.540867 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl658\" (UniqueName: 
\"kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658\") pod \"d80971b5-fe84-4ad9-a5db-75a00e17f031\" (UID: \"d80971b5-fe84-4ad9-a5db-75a00e17f031\") " Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.546941 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658" (OuterVolumeSpecName: "kube-api-access-fl658") pod "d80971b5-fe84-4ad9-a5db-75a00e17f031" (UID: "d80971b5-fe84-4ad9-a5db-75a00e17f031"). InnerVolumeSpecName "kube-api-access-fl658". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.547372 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d80971b5-fe84-4ad9-a5db-75a00e17f031" (UID: "d80971b5-fe84-4ad9-a5db-75a00e17f031"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.571984 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d80971b5-fe84-4ad9-a5db-75a00e17f031" (UID: "d80971b5-fe84-4ad9-a5db-75a00e17f031"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.576694 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory" (OuterVolumeSpecName: "inventory") pod "d80971b5-fe84-4ad9-a5db-75a00e17f031" (UID: "d80971b5-fe84-4ad9-a5db-75a00e17f031"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.644400 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.644450 4801 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.644467 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl658\" (UniqueName: \"kubernetes.io/projected/d80971b5-fe84-4ad9-a5db-75a00e17f031-kube-api-access-fl658\") on node \"crc\" DevicePath \"\"" Dec 06 03:33:33 crc kubenswrapper[4801]: I1206 03:33:33.644483 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d80971b5-fe84-4ad9-a5db-75a00e17f031-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.029003 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" event={"ID":"d80971b5-fe84-4ad9-a5db-75a00e17f031","Type":"ContainerDied","Data":"bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9"} Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.029273 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3e6f4e09a2fc8c5db4654cbbf91914d7803ca6bfa4a2da57aaaa4c6d4957a9" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.029194 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.107997 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8"] Dec 06 03:33:34 crc kubenswrapper[4801]: E1206 03:33:34.108460 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.108484 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.108679 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.109375 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.114879 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.115094 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.115218 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.115315 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.125789 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8"] Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.253979 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.254153 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.254212 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.254242 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvp6k\" (UniqueName: \"kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.355781 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.355883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.355915 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvp6k\" (UniqueName: \"kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.356000 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.359741 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.359933 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.360451 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.373360 4801 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-xvp6k\" (UniqueName: \"kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:34 crc kubenswrapper[4801]: I1206 03:33:34.425778 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:33:35 crc kubenswrapper[4801]: I1206 03:33:35.079518 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8"] Dec 06 03:33:35 crc kubenswrapper[4801]: I1206 03:33:35.080192 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:33:36 crc kubenswrapper[4801]: I1206 03:33:36.048906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" event={"ID":"f9cbfc7a-8123-4bf7-bebc-fb50674cf566","Type":"ContainerStarted","Data":"c97bc66664cc4b0c33084dfc8c91068c2a486663a133764597c58b04c80d4b75"} Dec 06 03:33:36 crc kubenswrapper[4801]: I1206 03:33:36.049477 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" event={"ID":"f9cbfc7a-8123-4bf7-bebc-fb50674cf566","Type":"ContainerStarted","Data":"69dc0115341d1e32529233fda1d58e9ceaf74504b57fa44baffd7a9e973fc619"} Dec 06 03:33:36 crc kubenswrapper[4801]: I1206 03:33:36.070016 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" podStartSLOduration=1.6255817590000001 podStartE2EDuration="2.069988722s" podCreationTimestamp="2025-12-06 03:33:34 +0000 UTC" firstStartedPulling="2025-12-06 03:33:35.079994856 +0000 UTC m=+1668.202602428" lastFinishedPulling="2025-12-06 
03:33:35.524401819 +0000 UTC m=+1668.647009391" observedRunningTime="2025-12-06 03:33:36.065519191 +0000 UTC m=+1669.188126763" watchObservedRunningTime="2025-12-06 03:33:36.069988722 +0000 UTC m=+1669.192596304" Dec 06 03:33:38 crc kubenswrapper[4801]: I1206 03:33:38.212858 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:33:38 crc kubenswrapper[4801]: E1206 03:33:38.213410 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:33:52 crc kubenswrapper[4801]: I1206 03:33:52.213093 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:33:52 crc kubenswrapper[4801]: E1206 03:33:52.214181 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:34:07 crc kubenswrapper[4801]: I1206 03:34:07.220437 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:34:07 crc kubenswrapper[4801]: E1206 03:34:07.221349 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:34:18 crc kubenswrapper[4801]: I1206 03:34:18.592470 4801 scope.go:117] "RemoveContainer" containerID="281b72f0abffde05cad5b84de79336d1666f87697c96f682c162076ab3b68e2c" Dec 06 03:34:19 crc kubenswrapper[4801]: I1206 03:34:19.213233 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:34:19 crc kubenswrapper[4801]: E1206 03:34:19.213550 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:34:30 crc kubenswrapper[4801]: I1206 03:34:30.214120 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:34:30 crc kubenswrapper[4801]: E1206 03:34:30.216696 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:34:41 crc kubenswrapper[4801]: I1206 03:34:41.212470 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:34:41 crc kubenswrapper[4801]: 
E1206 03:34:41.213504 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:34:54 crc kubenswrapper[4801]: I1206 03:34:54.212805 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:34:54 crc kubenswrapper[4801]: E1206 03:34:54.213632 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:35:07 crc kubenswrapper[4801]: I1206 03:35:07.219561 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:35:07 crc kubenswrapper[4801]: E1206 03:35:07.220378 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:35:22 crc kubenswrapper[4801]: I1206 03:35:22.213101 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:35:22 crc 
kubenswrapper[4801]: E1206 03:35:22.214335 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:35:36 crc kubenswrapper[4801]: I1206 03:35:36.213175 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:35:36 crc kubenswrapper[4801]: E1206 03:35:36.217257 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:35:48 crc kubenswrapper[4801]: I1206 03:35:48.213145 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:35:48 crc kubenswrapper[4801]: E1206 03:35:48.213784 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:01 crc kubenswrapper[4801]: I1206 03:36:01.213162 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 
06 03:36:01 crc kubenswrapper[4801]: E1206 03:36:01.214143 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.040241 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6f1a-account-create-update-tgmnk"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.051747 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-89sv7"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.059574 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-147f-account-create-update-dg2m9"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.072064 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6f1a-account-create-update-tgmnk"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.083131 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-089f-account-create-update-zx4mt"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.093891 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-ppn95"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.097460 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-27wtd"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.103922 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-89sv7"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.110547 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-147f-account-create-update-dg2m9"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.117437 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-089f-account-create-update-zx4mt"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.125005 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-27wtd"] Dec 06 03:36:12 crc kubenswrapper[4801]: I1206 03:36:12.135183 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-ppn95"] Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.223184 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b1d8fa-2a96-47ac-aaef-bf2a09f00373" path="/var/lib/kubelet/pods/21b1d8fa-2a96-47ac-aaef-bf2a09f00373/volumes" Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.225915 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587add7b-0716-4ce8-b807-4fb5415f85aa" path="/var/lib/kubelet/pods/587add7b-0716-4ce8-b807-4fb5415f85aa/volumes" Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.226822 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f56097-f68c-48fe-b455-33f6119871c5" path="/var/lib/kubelet/pods/77f56097-f68c-48fe-b455-33f6119871c5/volumes" Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.227626 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f602-124c-4b96-854a-ff8e9583ec6b" path="/var/lib/kubelet/pods/b605f602-124c-4b96-854a-ff8e9583ec6b/volumes" Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.228394 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c4a0a8-727e-4e11-891e-c635069d7a91" path="/var/lib/kubelet/pods/d7c4a0a8-727e-4e11-891e-c635069d7a91/volumes" Dec 06 03:36:13 crc kubenswrapper[4801]: I1206 03:36:13.229181 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f618d289-2120-4097-80bd-0cff83800ff8" 
path="/var/lib/kubelet/pods/f618d289-2120-4097-80bd-0cff83800ff8/volumes" Dec 06 03:36:14 crc kubenswrapper[4801]: I1206 03:36:14.212633 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:36:14 crc kubenswrapper[4801]: E1206 03:36:14.212918 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.031841 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-1d79-account-create-update-jx9mp"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.052116 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7af4-account-create-update-s5mjk"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.062694 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-1d79-account-create-update-jx9mp"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.077657 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-psmjd"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.086975 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7af4-account-create-update-s5mjk"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.096981 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9af5-account-create-update-45pgz"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.107361 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-psmjd"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 
03:36:17.116666 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9af5-account-create-update-45pgz"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.129552 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-m6dhf"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.138801 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fsgvd"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.148142 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fsgvd"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.155358 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-m6dhf"] Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.223224 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a61607b-902e-4973-b703-ed7eb2b6939a" path="/var/lib/kubelet/pods/1a61607b-902e-4973-b703-ed7eb2b6939a/volumes" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.224057 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49252935-dea5-4610-9dec-31761dd3973a" path="/var/lib/kubelet/pods/49252935-dea5-4610-9dec-31761dd3973a/volumes" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.224601 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6437a4fc-969d-48ef-bc59-8115463e22b4" path="/var/lib/kubelet/pods/6437a4fc-969d-48ef-bc59-8115463e22b4/volumes" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.225174 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a79cca91-19cc-486a-82ad-698b4a88e673" path="/var/lib/kubelet/pods/a79cca91-19cc-486a-82ad-698b4a88e673/volumes" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.226330 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac6e360-d06f-4966-ad94-07325a1c4d0f" 
path="/var/lib/kubelet/pods/dac6e360-d06f-4966-ad94-07325a1c4d0f/volumes" Dec 06 03:36:17 crc kubenswrapper[4801]: I1206 03:36:17.226934 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ae239b-78c8-4b43-aa09-6ffb5b6deeea" path="/var/lib/kubelet/pods/e0ae239b-78c8-4b43-aa09-6ffb5b6deeea/volumes" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.672158 4801 scope.go:117] "RemoveContainer" containerID="ceedeaecf815a5549737afe75eb0009ff51352b60875da6468e5019752de45ac" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.698151 4801 scope.go:117] "RemoveContainer" containerID="65bf14665a73a7cbe3078a04fa23c750fd57a89a3bf2fe77174dfaa376b57dac" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.739352 4801 scope.go:117] "RemoveContainer" containerID="989c172c25b01dfcd500e5add5b83e1bc6de0ee008e772fad6e7b3f6b50dc52b" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.780473 4801 scope.go:117] "RemoveContainer" containerID="279b807e025a63fa76e0db0e0ce21c798fdaef2e865a1953ad2b1f8fe5b89875" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.823478 4801 scope.go:117] "RemoveContainer" containerID="0138be3dfb8cae6fb2cba0418d9d67a13727cb1b0ff08ac6f59245683c91ecd9" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.866745 4801 scope.go:117] "RemoveContainer" containerID="cd8814ad447eb8c1a7a022a4ee26232df52ebd76e1104b9ed8fd8c23e409a730" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.904503 4801 scope.go:117] "RemoveContainer" containerID="e09c1cc3c628a8796457ec2b65b9565b1725c8a4bcc58380a9c17578cfe9d0f5" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.923162 4801 scope.go:117] "RemoveContainer" containerID="75a4436168f6cc66bdd3be34116cfe66a10748e9f2cf143fb3a725afd6c45260" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.942941 4801 scope.go:117] "RemoveContainer" containerID="16393c50fdfe02859d7012c4aed6a48eff1d9317d41c8e0273642e5abd8b1b0d" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.961530 
4801 scope.go:117] "RemoveContainer" containerID="c31fb23b0fd48026123060beeaf8ab73e82f006b0848957a5aa591b71dc2ac5c" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.980278 4801 scope.go:117] "RemoveContainer" containerID="a0ad87272fb18a313441f48899db111047b5274e47b4de376583c4747e33243a" Dec 06 03:36:18 crc kubenswrapper[4801]: I1206 03:36:18.999726 4801 scope.go:117] "RemoveContainer" containerID="cff433d1c1ea532d9d1e130aa5032185a6ffd75b29985a1ce97bef06140db53e" Dec 06 03:36:29 crc kubenswrapper[4801]: I1206 03:36:29.212978 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:36:29 crc kubenswrapper[4801]: E1206 03:36:29.213851 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:33 crc kubenswrapper[4801]: I1206 03:36:33.049948 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-p4wzv"] Dec 06 03:36:33 crc kubenswrapper[4801]: I1206 03:36:33.057092 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-p4wzv"] Dec 06 03:36:33 crc kubenswrapper[4801]: I1206 03:36:33.228570 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca85741f-399a-4587-9f14-b972c56193e9" path="/var/lib/kubelet/pods/ca85741f-399a-4587-9f14-b972c56193e9/volumes" Dec 06 03:36:41 crc kubenswrapper[4801]: I1206 03:36:41.213130 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:36:41 crc kubenswrapper[4801]: E1206 03:36:41.213987 4801 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:53 crc kubenswrapper[4801]: I1206 03:36:53.198311 4801 generic.go:334] "Generic (PLEG): container finished" podID="f9cbfc7a-8123-4bf7-bebc-fb50674cf566" containerID="c97bc66664cc4b0c33084dfc8c91068c2a486663a133764597c58b04c80d4b75" exitCode=0 Dec 06 03:36:53 crc kubenswrapper[4801]: I1206 03:36:53.198402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" event={"ID":"f9cbfc7a-8123-4bf7-bebc-fb50674cf566","Type":"ContainerDied","Data":"c97bc66664cc4b0c33084dfc8c91068c2a486663a133764597c58b04c80d4b75"} Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.609934 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.715715 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory\") pod \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.715862 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key\") pod \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.715903 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle\") pod \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.716009 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvp6k\" (UniqueName: \"kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k\") pod \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\" (UID: \"f9cbfc7a-8123-4bf7-bebc-fb50674cf566\") " Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.721874 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k" (OuterVolumeSpecName: "kube-api-access-xvp6k") pod "f9cbfc7a-8123-4bf7-bebc-fb50674cf566" (UID: "f9cbfc7a-8123-4bf7-bebc-fb50674cf566"). InnerVolumeSpecName "kube-api-access-xvp6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.721963 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f9cbfc7a-8123-4bf7-bebc-fb50674cf566" (UID: "f9cbfc7a-8123-4bf7-bebc-fb50674cf566"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.742836 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory" (OuterVolumeSpecName: "inventory") pod "f9cbfc7a-8123-4bf7-bebc-fb50674cf566" (UID: "f9cbfc7a-8123-4bf7-bebc-fb50674cf566"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.744243 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f9cbfc7a-8123-4bf7-bebc-fb50674cf566" (UID: "f9cbfc7a-8123-4bf7-bebc-fb50674cf566"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.818945 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvp6k\" (UniqueName: \"kubernetes.io/projected/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-kube-api-access-xvp6k\") on node \"crc\" DevicePath \"\"" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.818999 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.819016 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:36:54 crc kubenswrapper[4801]: I1206 03:36:54.819029 4801 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9cbfc7a-8123-4bf7-bebc-fb50674cf566-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.216571 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.228984 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8" event={"ID":"f9cbfc7a-8123-4bf7-bebc-fb50674cf566","Type":"ContainerDied","Data":"69dc0115341d1e32529233fda1d58e9ceaf74504b57fa44baffd7a9e973fc619"} Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.229039 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69dc0115341d1e32529233fda1d58e9ceaf74504b57fa44baffd7a9e973fc619" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.311135 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb"] Dec 06 03:36:55 crc kubenswrapper[4801]: E1206 03:36:55.311595 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cbfc7a-8123-4bf7-bebc-fb50674cf566" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.311618 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cbfc7a-8123-4bf7-bebc-fb50674cf566" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.311848 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cbfc7a-8123-4bf7-bebc-fb50674cf566" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.312436 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.315092 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.315275 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.315383 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.320471 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.321258 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb"] Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.433616 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.433825 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmwnx\" (UniqueName: \"kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: 
I1206 03:36:55.433902 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.537575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmwnx\" (UniqueName: \"kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.537752 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.538119 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.541967 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.553477 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.565095 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmwnx\" (UniqueName: \"kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:55 crc kubenswrapper[4801]: I1206 03:36:55.631366 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:36:56 crc kubenswrapper[4801]: I1206 03:36:56.159926 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb"] Dec 06 03:36:56 crc kubenswrapper[4801]: I1206 03:36:56.213128 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:36:56 crc kubenswrapper[4801]: E1206 03:36:56.213652 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:36:56 crc kubenswrapper[4801]: I1206 03:36:56.228331 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" event={"ID":"9bff6d99-5c8c-421c-9f06-e24e58c59492","Type":"ContainerStarted","Data":"a5734aa619e8bf1961a7ec72421379bb5107b65cc15a6d0a045196fbfd59b017"} Dec 06 03:36:57 crc kubenswrapper[4801]: I1206 03:36:57.247077 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" event={"ID":"9bff6d99-5c8c-421c-9f06-e24e58c59492","Type":"ContainerStarted","Data":"c93c85120d18997bdcbaa8ba9d621a6c18adc1af51101d292fbf2f2641dcdc66"} Dec 06 03:36:57 crc kubenswrapper[4801]: I1206 03:36:57.275199 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" podStartSLOduration=1.763879149 podStartE2EDuration="2.275173863s" podCreationTimestamp="2025-12-06 03:36:55 +0000 UTC" 
firstStartedPulling="2025-12-06 03:36:56.161460354 +0000 UTC m=+1869.284067936" lastFinishedPulling="2025-12-06 03:36:56.672755078 +0000 UTC m=+1869.795362650" observedRunningTime="2025-12-06 03:36:57.264484884 +0000 UTC m=+1870.387092456" watchObservedRunningTime="2025-12-06 03:36:57.275173863 +0000 UTC m=+1870.397781435" Dec 06 03:37:07 crc kubenswrapper[4801]: I1206 03:37:07.217025 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:37:07 crc kubenswrapper[4801]: E1206 03:37:07.217692 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:37:19 crc kubenswrapper[4801]: I1206 03:37:19.211747 4801 scope.go:117] "RemoveContainer" containerID="62970a80dea1ea2fdd943513381e359a1789f4745e68d11c45dad0c338e5c77b" Dec 06 03:37:22 crc kubenswrapper[4801]: I1206 03:37:22.212375 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:37:22 crc kubenswrapper[4801]: E1206 03:37:22.213170 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:37:37 crc kubenswrapper[4801]: I1206 03:37:37.219944 4801 scope.go:117] "RemoveContainer" 
containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:37:37 crc kubenswrapper[4801]: E1206 03:37:37.220858 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:37:44 crc kubenswrapper[4801]: I1206 03:37:44.068458 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qxt5m"] Dec 06 03:37:44 crc kubenswrapper[4801]: I1206 03:37:44.078008 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qxt5m"] Dec 06 03:37:45 crc kubenswrapper[4801]: I1206 03:37:45.030981 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jbzbp"] Dec 06 03:37:45 crc kubenswrapper[4801]: I1206 03:37:45.041914 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jbzbp"] Dec 06 03:37:45 crc kubenswrapper[4801]: I1206 03:37:45.230874 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5b3256-9ed6-45e3-acec-a0b14d607802" path="/var/lib/kubelet/pods/7f5b3256-9ed6-45e3-acec-a0b14d607802/volumes" Dec 06 03:37:45 crc kubenswrapper[4801]: I1206 03:37:45.232620 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eed8b95a-e314-4ab9-91f4-06df2649e614" path="/var/lib/kubelet/pods/eed8b95a-e314-4ab9-91f4-06df2649e614/volumes" Dec 06 03:37:46 crc kubenswrapper[4801]: I1206 03:37:46.034321 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tp4d2"] Dec 06 03:37:46 crc kubenswrapper[4801]: I1206 03:37:46.043055 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-db-sync-tp4d2"] Dec 06 03:37:46 crc kubenswrapper[4801]: I1206 03:37:46.052226 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-qwr7p"] Dec 06 03:37:46 crc kubenswrapper[4801]: I1206 03:37:46.060414 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-qwr7p"] Dec 06 03:37:47 crc kubenswrapper[4801]: I1206 03:37:47.241038 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3842042e-a4c9-4f33-bda5-b11f58a69519" path="/var/lib/kubelet/pods/3842042e-a4c9-4f33-bda5-b11f58a69519/volumes" Dec 06 03:37:47 crc kubenswrapper[4801]: I1206 03:37:47.242475 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861fdd2b-c39c-4122-94a2-8eb5744c1536" path="/var/lib/kubelet/pods/861fdd2b-c39c-4122-94a2-8eb5744c1536/volumes" Dec 06 03:37:51 crc kubenswrapper[4801]: I1206 03:37:51.212568 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:37:51 crc kubenswrapper[4801]: I1206 03:37:51.767430 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b"} Dec 06 03:38:10 crc kubenswrapper[4801]: I1206 03:38:10.950191 4801 generic.go:334] "Generic (PLEG): container finished" podID="9bff6d99-5c8c-421c-9f06-e24e58c59492" containerID="c93c85120d18997bdcbaa8ba9d621a6c18adc1af51101d292fbf2f2641dcdc66" exitCode=0 Dec 06 03:38:10 crc kubenswrapper[4801]: I1206 03:38:10.950276 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" event={"ID":"9bff6d99-5c8c-421c-9f06-e24e58c59492","Type":"ContainerDied","Data":"c93c85120d18997bdcbaa8ba9d621a6c18adc1af51101d292fbf2f2641dcdc66"} Dec 06 03:38:12 crc 
kubenswrapper[4801]: I1206 03:38:12.375700 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.383304 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key\") pod \"9bff6d99-5c8c-421c-9f06-e24e58c59492\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.383388 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory\") pod \"9bff6d99-5c8c-421c-9f06-e24e58c59492\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.383471 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmwnx\" (UniqueName: \"kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx\") pod \"9bff6d99-5c8c-421c-9f06-e24e58c59492\" (UID: \"9bff6d99-5c8c-421c-9f06-e24e58c59492\") " Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.397466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx" (OuterVolumeSpecName: "kube-api-access-dmwnx") pod "9bff6d99-5c8c-421c-9f06-e24e58c59492" (UID: "9bff6d99-5c8c-421c-9f06-e24e58c59492"). InnerVolumeSpecName "kube-api-access-dmwnx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.418582 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory" (OuterVolumeSpecName: "inventory") pod "9bff6d99-5c8c-421c-9f06-e24e58c59492" (UID: "9bff6d99-5c8c-421c-9f06-e24e58c59492"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.430680 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9bff6d99-5c8c-421c-9f06-e24e58c59492" (UID: "9bff6d99-5c8c-421c-9f06-e24e58c59492"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.485172 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmwnx\" (UniqueName: \"kubernetes.io/projected/9bff6d99-5c8c-421c-9f06-e24e58c59492-kube-api-access-dmwnx\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.485201 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.485209 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9bff6d99-5c8c-421c-9f06-e24e58c59492-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.978168 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" 
event={"ID":"9bff6d99-5c8c-421c-9f06-e24e58c59492","Type":"ContainerDied","Data":"a5734aa619e8bf1961a7ec72421379bb5107b65cc15a6d0a045196fbfd59b017"} Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.978261 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5734aa619e8bf1961a7ec72421379bb5107b65cc15a6d0a045196fbfd59b017" Dec 06 03:38:12 crc kubenswrapper[4801]: I1206 03:38:12.978314 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.074299 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr"] Dec 06 03:38:13 crc kubenswrapper[4801]: E1206 03:38:13.074732 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bff6d99-5c8c-421c-9f06-e24e58c59492" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.074778 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bff6d99-5c8c-421c-9f06-e24e58c59492" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.074995 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bff6d99-5c8c-421c-9f06-e24e58c59492" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.075647 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.078069 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.078916 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.079621 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.080000 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.090896 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr"] Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.198820 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.198977 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54249\" (UniqueName: \"kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 
03:38:13.199050 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.301182 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54249\" (UniqueName: \"kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.301526 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.301700 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.306466 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.307074 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.320479 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54249\" (UniqueName: \"kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.408131 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.948042 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr"] Dec 06 03:38:13 crc kubenswrapper[4801]: I1206 03:38:13.988837 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" event={"ID":"c4cd69b3-737d-4293-bbe0-426a284b5c3b","Type":"ContainerStarted","Data":"5a06a997194e4d3c9f1726e73898d09884e1fdbc095c1f91d87f844422cc8248"} Dec 06 03:38:16 crc kubenswrapper[4801]: I1206 03:38:16.008692 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" event={"ID":"c4cd69b3-737d-4293-bbe0-426a284b5c3b","Type":"ContainerStarted","Data":"b7135eb9bc3c21279a8b453c072fe93d13e0aa3ea7d6718699406d71a97fca79"} Dec 06 03:38:19 crc kubenswrapper[4801]: I1206 03:38:19.305428 4801 scope.go:117] "RemoveContainer" containerID="9b1c57f8e8d7af3e0cf5be0fa3d46c0f986bf9d98ac1ff012c7fb21aaa08899b" Dec 06 03:38:19 crc kubenswrapper[4801]: I1206 03:38:19.359672 4801 scope.go:117] "RemoveContainer" containerID="0ba08bc914c6ef000e31bb724672892b9bdc1aabc93e340f4fa7166a7b5bef1b" Dec 06 03:38:19 crc kubenswrapper[4801]: I1206 03:38:19.384972 4801 scope.go:117] "RemoveContainer" containerID="096ceb38bfb155296281785c717f33b9461f7bd30de59f0b078cb14d3fac60d6" Dec 06 03:38:19 crc kubenswrapper[4801]: I1206 03:38:19.441461 4801 scope.go:117] "RemoveContainer" containerID="458c35b58d2d00b22609a0f92344c2fb33e07af8a6ce87cce3ec13e2d2d2bb76" Dec 06 03:38:21 crc kubenswrapper[4801]: I1206 03:38:21.060507 4801 generic.go:334] "Generic (PLEG): container finished" podID="c4cd69b3-737d-4293-bbe0-426a284b5c3b" containerID="b7135eb9bc3c21279a8b453c072fe93d13e0aa3ea7d6718699406d71a97fca79" exitCode=0 Dec 06 03:38:21 crc kubenswrapper[4801]: 
I1206 03:38:21.060597 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" event={"ID":"c4cd69b3-737d-4293-bbe0-426a284b5c3b","Type":"ContainerDied","Data":"b7135eb9bc3c21279a8b453c072fe93d13e0aa3ea7d6718699406d71a97fca79"} Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.406519 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.515954 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key\") pod \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.516164 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54249\" (UniqueName: \"kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249\") pod \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.516198 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory\") pod \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\" (UID: \"c4cd69b3-737d-4293-bbe0-426a284b5c3b\") " Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.522309 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249" (OuterVolumeSpecName: "kube-api-access-54249") pod "c4cd69b3-737d-4293-bbe0-426a284b5c3b" (UID: "c4cd69b3-737d-4293-bbe0-426a284b5c3b"). InnerVolumeSpecName "kube-api-access-54249". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.548722 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4cd69b3-737d-4293-bbe0-426a284b5c3b" (UID: "c4cd69b3-737d-4293-bbe0-426a284b5c3b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.564049 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory" (OuterVolumeSpecName: "inventory") pod "c4cd69b3-737d-4293-bbe0-426a284b5c3b" (UID: "c4cd69b3-737d-4293-bbe0-426a284b5c3b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.617978 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.618022 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54249\" (UniqueName: \"kubernetes.io/projected/c4cd69b3-737d-4293-bbe0-426a284b5c3b-kube-api-access-54249\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:22 crc kubenswrapper[4801]: I1206 03:38:22.618038 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4cd69b3-737d-4293-bbe0-426a284b5c3b-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.078098 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" 
event={"ID":"c4cd69b3-737d-4293-bbe0-426a284b5c3b","Type":"ContainerDied","Data":"5a06a997194e4d3c9f1726e73898d09884e1fdbc095c1f91d87f844422cc8248"} Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.078150 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a06a997194e4d3c9f1726e73898d09884e1fdbc095c1f91d87f844422cc8248" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.078202 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.140738 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg"] Dec 06 03:38:23 crc kubenswrapper[4801]: E1206 03:38:23.143242 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cd69b3-737d-4293-bbe0-426a284b5c3b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.143339 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cd69b3-737d-4293-bbe0-426a284b5c3b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.143571 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cd69b3-737d-4293-bbe0-426a284b5c3b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.144480 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.147163 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.147402 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.147660 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.148306 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.153350 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg"] Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.329798 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6d6\" (UniqueName: \"kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.330103 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.330234 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.432024 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6d6\" (UniqueName: \"kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.432083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.432128 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.437590 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: 
\"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.438128 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.450367 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6d6\" (UniqueName: \"kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-99vwg\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:23 crc kubenswrapper[4801]: I1206 03:38:23.461904 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:38:24 crc kubenswrapper[4801]: I1206 03:38:24.000256 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg"] Dec 06 03:38:24 crc kubenswrapper[4801]: I1206 03:38:24.086625 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" event={"ID":"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6","Type":"ContainerStarted","Data":"c8ff9079990ecf824279a86815f28fcc9860173211459da096c182626d3f6467"} Dec 06 03:38:25 crc kubenswrapper[4801]: I1206 03:38:25.095124 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" event={"ID":"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6","Type":"ContainerStarted","Data":"d55220de087ed10413e0eae681cc376041d70b8b7ed3000153c1e4ba2b927ed0"} Dec 06 03:38:25 crc kubenswrapper[4801]: I1206 03:38:25.114556 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" podStartSLOduration=1.662517799 podStartE2EDuration="2.114536049s" podCreationTimestamp="2025-12-06 03:38:23 +0000 UTC" firstStartedPulling="2025-12-06 03:38:24.002360962 +0000 UTC m=+1957.124968534" lastFinishedPulling="2025-12-06 03:38:24.454379212 +0000 UTC m=+1957.576986784" observedRunningTime="2025-12-06 03:38:25.110442789 +0000 UTC m=+1958.233050361" watchObservedRunningTime="2025-12-06 03:38:25.114536049 +0000 UTC m=+1958.237143621" Dec 06 03:38:35 crc kubenswrapper[4801]: I1206 03:38:35.031740 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7s22l"] Dec 06 03:38:35 crc kubenswrapper[4801]: I1206 03:38:35.038880 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7s22l"] Dec 06 03:38:35 crc kubenswrapper[4801]: I1206 03:38:35.221990 4801 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8a2ead4-9b5d-465c-9b4a-5c7377ad246f" path="/var/lib/kubelet/pods/e8a2ead4-9b5d-465c-9b4a-5c7377ad246f/volumes" Dec 06 03:38:38 crc kubenswrapper[4801]: I1206 03:38:38.035129 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-8db5z"] Dec 06 03:38:38 crc kubenswrapper[4801]: I1206 03:38:38.043014 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-8db5z"] Dec 06 03:38:39 crc kubenswrapper[4801]: I1206 03:38:39.223198 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fef54f-ef5f-4e2b-b0d1-d4ce567280fb" path="/var/lib/kubelet/pods/57fef54f-ef5f-4e2b-b0d1-d4ce567280fb/volumes" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.114900 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.117323 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.128719 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.203147 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ppcv\" (UniqueName: \"kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.203268 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.203321 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.304944 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.305083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6ppcv\" (UniqueName: \"kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.305159 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.305624 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.305660 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.329120 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ppcv\" (UniqueName: \"kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv\") pod \"redhat-operators-b7gpz\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.488225 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:38:52 crc kubenswrapper[4801]: I1206 03:38:52.992515 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:38:53 crc kubenswrapper[4801]: I1206 03:38:53.324950 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerStarted","Data":"bb0df5a5129628cc18731e4f79b9ba91b393506be248d9d687061d755aaf7b34"} Dec 06 03:38:53 crc kubenswrapper[4801]: I1206 03:38:53.324988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerStarted","Data":"4869994e4615d37713bedc91baf19f073c08af4bff2cdcc21e4279d4d9d43fd0"} Dec 06 03:38:53 crc kubenswrapper[4801]: I1206 03:38:53.326555 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:38:54 crc kubenswrapper[4801]: I1206 03:38:54.335048 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerID="bb0df5a5129628cc18731e4f79b9ba91b393506be248d9d687061d755aaf7b34" exitCode=0 Dec 06 03:38:54 crc kubenswrapper[4801]: I1206 03:38:54.335262 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerDied","Data":"bb0df5a5129628cc18731e4f79b9ba91b393506be248d9d687061d755aaf7b34"} Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.103632 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.105886 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.123255 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.160403 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.160833 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflph\" (UniqueName: \"kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.160984 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.262866 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bflph\" (UniqueName: \"kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.262949 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.263012 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.263466 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.263858 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.286615 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bflph\" (UniqueName: \"kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph\") pod \"certified-operators-pws28\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.344666 4801 generic.go:334] "Generic (PLEG): container 
finished" podID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerID="df29ce01de1401c0724a552cc167bb5a24de41e88b4cf3bb608b8ce54c3ca124" exitCode=0 Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.344710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerDied","Data":"df29ce01de1401c0724a552cc167bb5a24de41e88b4cf3bb608b8ce54c3ca124"} Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.426254 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:38:55 crc kubenswrapper[4801]: I1206 03:38:55.925870 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:38:56 crc kubenswrapper[4801]: W1206 03:38:56.371964 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73d500de_c46d_45b8_a379_eba0e670f1af.slice/crio-f377023c48ce2b56112d7ec1f3b7b70fb8eb693b7a1e24e102bd746e926c9989 WatchSource:0}: Error finding container f377023c48ce2b56112d7ec1f3b7b70fb8eb693b7a1e24e102bd746e926c9989: Status 404 returned error can't find the container with id f377023c48ce2b56112d7ec1f3b7b70fb8eb693b7a1e24e102bd746e926c9989 Dec 06 03:38:57 crc kubenswrapper[4801]: I1206 03:38:57.361573 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerStarted","Data":"f377023c48ce2b56112d7ec1f3b7b70fb8eb693b7a1e24e102bd746e926c9989"} Dec 06 03:38:58 crc kubenswrapper[4801]: I1206 03:38:58.368958 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" 
event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerStarted","Data":"04c6f74a03cd0c71a0503a8defd1f17e75802e87d691c8c7012d40d6cd108fa8"} Dec 06 03:38:58 crc kubenswrapper[4801]: I1206 03:38:58.371959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerStarted","Data":"26945bc279868dead33d30cbc7acf0de0242ad5dd82b5bbab3a27604f14cd2fa"} Dec 06 03:38:59 crc kubenswrapper[4801]: I1206 03:38:59.382453 4801 generic.go:334] "Generic (PLEG): container finished" podID="73d500de-c46d-45b8-a379-eba0e670f1af" containerID="04c6f74a03cd0c71a0503a8defd1f17e75802e87d691c8c7012d40d6cd108fa8" exitCode=0 Dec 06 03:38:59 crc kubenswrapper[4801]: I1206 03:38:59.382652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerDied","Data":"04c6f74a03cd0c71a0503a8defd1f17e75802e87d691c8c7012d40d6cd108fa8"} Dec 06 03:38:59 crc kubenswrapper[4801]: I1206 03:38:59.410388 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b7gpz" podStartSLOduration=4.087310273 podStartE2EDuration="7.410328807s" podCreationTimestamp="2025-12-06 03:38:52 +0000 UTC" firstStartedPulling="2025-12-06 03:38:53.326322567 +0000 UTC m=+1986.448930139" lastFinishedPulling="2025-12-06 03:38:56.649341101 +0000 UTC m=+1989.771948673" observedRunningTime="2025-12-06 03:38:58.403866943 +0000 UTC m=+1991.526474545" watchObservedRunningTime="2025-12-06 03:38:59.410328807 +0000 UTC m=+1992.532936379" Dec 06 03:39:00 crc kubenswrapper[4801]: I1206 03:39:00.395038 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" 
event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerStarted","Data":"91ef0e5ad99e50ec46befbb0f595cc479e0e2a7bd3b92f0057234d9d8ba154da"} Dec 06 03:39:01 crc kubenswrapper[4801]: I1206 03:39:01.404176 4801 generic.go:334] "Generic (PLEG): container finished" podID="73d500de-c46d-45b8-a379-eba0e670f1af" containerID="91ef0e5ad99e50ec46befbb0f595cc479e0e2a7bd3b92f0057234d9d8ba154da" exitCode=0 Dec 06 03:39:01 crc kubenswrapper[4801]: I1206 03:39:01.404232 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerDied","Data":"91ef0e5ad99e50ec46befbb0f595cc479e0e2a7bd3b92f0057234d9d8ba154da"} Dec 06 03:39:02 crc kubenswrapper[4801]: I1206 03:39:02.413705 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerStarted","Data":"4e2b5a8c3fbe216448d329dbe0657b1ae0f7100e22aa9acaf5733c395aa013ae"} Dec 06 03:39:02 crc kubenswrapper[4801]: I1206 03:39:02.451233 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pws28" podStartSLOduration=5.047180312 podStartE2EDuration="7.451212982s" podCreationTimestamp="2025-12-06 03:38:55 +0000 UTC" firstStartedPulling="2025-12-06 03:38:59.38458445 +0000 UTC m=+1992.507192032" lastFinishedPulling="2025-12-06 03:39:01.78861713 +0000 UTC m=+1994.911224702" observedRunningTime="2025-12-06 03:39:02.446866475 +0000 UTC m=+1995.569474047" watchObservedRunningTime="2025-12-06 03:39:02.451212982 +0000 UTC m=+1995.573820544" Dec 06 03:39:02 crc kubenswrapper[4801]: I1206 03:39:02.489596 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:02 crc kubenswrapper[4801]: I1206 03:39:02.489644 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:03 crc kubenswrapper[4801]: I1206 03:39:03.537384 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b7gpz" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="registry-server" probeResult="failure" output=< Dec 06 03:39:03 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 03:39:03 crc kubenswrapper[4801]: > Dec 06 03:39:05 crc kubenswrapper[4801]: I1206 03:39:05.426844 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:05 crc kubenswrapper[4801]: I1206 03:39:05.426913 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:05 crc kubenswrapper[4801]: I1206 03:39:05.472295 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:07 crc kubenswrapper[4801]: I1206 03:39:07.460553 4801 generic.go:334] "Generic (PLEG): container finished" podID="9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" containerID="d55220de087ed10413e0eae681cc376041d70b8b7ed3000153c1e4ba2b927ed0" exitCode=0 Dec 06 03:39:07 crc kubenswrapper[4801]: I1206 03:39:07.461319 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" event={"ID":"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6","Type":"ContainerDied","Data":"d55220de087ed10413e0eae681cc376041d70b8b7ed3000153c1e4ba2b927ed0"} Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.840454 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.930318 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory\") pod \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.930534 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key\") pod \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.930688 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj6d6\" (UniqueName: \"kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6\") pod \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\" (UID: \"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6\") " Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.936675 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6" (OuterVolumeSpecName: "kube-api-access-jj6d6") pod "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" (UID: "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6"). InnerVolumeSpecName "kube-api-access-jj6d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.958698 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" (UID: "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:39:08 crc kubenswrapper[4801]: I1206 03:39:08.965663 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory" (OuterVolumeSpecName: "inventory") pod "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" (UID: "9d90de88-52ef-4cbf-a0e3-1b31a853cbf6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.032706 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj6d6\" (UniqueName: \"kubernetes.io/projected/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-kube-api-access-jj6d6\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.032739 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.032748 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.476386 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" event={"ID":"9d90de88-52ef-4cbf-a0e3-1b31a853cbf6","Type":"ContainerDied","Data":"c8ff9079990ecf824279a86815f28fcc9860173211459da096c182626d3f6467"} Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.476655 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8ff9079990ecf824279a86815f28fcc9860173211459da096c182626d3f6467" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.476434 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.559448 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"] Dec 06 03:39:09 crc kubenswrapper[4801]: E1206 03:39:09.559924 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.559951 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.560208 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.560941 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.563732 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.566162 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.566717 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.566722 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.571892 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"] Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.645721 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.646026 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.646236 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5k2n\" (UniqueName: \"kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.747525 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.747939 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.748287 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5k2n\" (UniqueName: \"kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.752126 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: 
\"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.752148 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.775304 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5k2n\" (UniqueName: \"kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:09 crc kubenswrapper[4801]: I1206 03:39:09.877211 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:10 crc kubenswrapper[4801]: I1206 03:39:10.404270 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"] Dec 06 03:39:10 crc kubenswrapper[4801]: I1206 03:39:10.487314 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" event={"ID":"94086867-d5b4-4c97-9f39-2df6a18bd4b7","Type":"ContainerStarted","Data":"33dad6b55215cf3c3f799ea7f1abf5a6f3aee41c616dca1297967474a8ab9dae"} Dec 06 03:39:12 crc kubenswrapper[4801]: I1206 03:39:12.551844 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:12 crc kubenswrapper[4801]: I1206 03:39:12.593452 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:13 crc kubenswrapper[4801]: I1206 03:39:13.660232 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:39:14 crc kubenswrapper[4801]: I1206 03:39:14.516345 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b7gpz" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="registry-server" containerID="cri-o://26945bc279868dead33d30cbc7acf0de0242ad5dd82b5bbab3a27604f14cd2fa" gracePeriod=2 Dec 06 03:39:14 crc kubenswrapper[4801]: I1206 03:39:14.517459 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" event={"ID":"94086867-d5b4-4c97-9f39-2df6a18bd4b7","Type":"ContainerStarted","Data":"8ecec5e2acc1c36e977ecb87ddbf62cf6b2efc24c1854efaec2b5f128882562f"} Dec 06 03:39:14 crc kubenswrapper[4801]: I1206 03:39:14.538910 4801 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" podStartSLOduration=2.728804103 podStartE2EDuration="5.538895638s" podCreationTimestamp="2025-12-06 03:39:09 +0000 UTC" firstStartedPulling="2025-12-06 03:39:10.411576843 +0000 UTC m=+2003.534184415" lastFinishedPulling="2025-12-06 03:39:13.221668368 +0000 UTC m=+2006.344275950" observedRunningTime="2025-12-06 03:39:14.535085715 +0000 UTC m=+2007.657693297" watchObservedRunningTime="2025-12-06 03:39:14.538895638 +0000 UTC m=+2007.661503210" Dec 06 03:39:15 crc kubenswrapper[4801]: I1206 03:39:15.041449 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bc6j9"] Dec 06 03:39:15 crc kubenswrapper[4801]: I1206 03:39:15.049317 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bc6j9"] Dec 06 03:39:15 crc kubenswrapper[4801]: I1206 03:39:15.223022 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec233495-e3c7-4268-8eb9-532e73143533" path="/var/lib/kubelet/pods/ec233495-e3c7-4268-8eb9-532e73143533/volumes" Dec 06 03:39:15 crc kubenswrapper[4801]: I1206 03:39:15.477922 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:16 crc kubenswrapper[4801]: I1206 03:39:16.057662 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:39:16 crc kubenswrapper[4801]: I1206 03:39:16.059216 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pws28" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="registry-server" containerID="cri-o://4e2b5a8c3fbe216448d329dbe0657b1ae0f7100e22aa9acaf5733c395aa013ae" gracePeriod=2 Dec 06 03:39:16 crc kubenswrapper[4801]: I1206 03:39:16.558963 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerID="26945bc279868dead33d30cbc7acf0de0242ad5dd82b5bbab3a27604f14cd2fa" exitCode=0 Dec 06 03:39:16 crc kubenswrapper[4801]: I1206 03:39:16.559007 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerDied","Data":"26945bc279868dead33d30cbc7acf0de0242ad5dd82b5bbab3a27604f14cd2fa"} Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:16.992267 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.083097 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content\") pod \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.083348 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ppcv\" (UniqueName: \"kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv\") pod \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.083385 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities\") pod \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\" (UID: \"fd30fc0e-d407-40cf-9857-aa61f2a84b8b\") " Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.084339 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities" (OuterVolumeSpecName: "utilities") pod 
"fd30fc0e-d407-40cf-9857-aa61f2a84b8b" (UID: "fd30fc0e-d407-40cf-9857-aa61f2a84b8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.089262 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv" (OuterVolumeSpecName: "kube-api-access-6ppcv") pod "fd30fc0e-d407-40cf-9857-aa61f2a84b8b" (UID: "fd30fc0e-d407-40cf-9857-aa61f2a84b8b"). InnerVolumeSpecName "kube-api-access-6ppcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.184657 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ppcv\" (UniqueName: \"kubernetes.io/projected/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-kube-api-access-6ppcv\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.184691 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.205122 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd30fc0e-d407-40cf-9857-aa61f2a84b8b" (UID: "fd30fc0e-d407-40cf-9857-aa61f2a84b8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.286626 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd30fc0e-d407-40cf-9857-aa61f2a84b8b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.569459 4801 generic.go:334] "Generic (PLEG): container finished" podID="73d500de-c46d-45b8-a379-eba0e670f1af" containerID="4e2b5a8c3fbe216448d329dbe0657b1ae0f7100e22aa9acaf5733c395aa013ae" exitCode=0 Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.569593 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerDied","Data":"4e2b5a8c3fbe216448d329dbe0657b1ae0f7100e22aa9acaf5733c395aa013ae"} Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.572257 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b7gpz" event={"ID":"fd30fc0e-d407-40cf-9857-aa61f2a84b8b","Type":"ContainerDied","Data":"4869994e4615d37713bedc91baf19f073c08af4bff2cdcc21e4279d4d9d43fd0"} Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.572301 4801 scope.go:117] "RemoveContainer" containerID="26945bc279868dead33d30cbc7acf0de0242ad5dd82b5bbab3a27604f14cd2fa" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.572320 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b7gpz" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.594788 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.607947 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b7gpz"] Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.609989 4801 scope.go:117] "RemoveContainer" containerID="df29ce01de1401c0724a552cc167bb5a24de41e88b4cf3bb608b8ce54c3ca124" Dec 06 03:39:17 crc kubenswrapper[4801]: I1206 03:39:17.647309 4801 scope.go:117] "RemoveContainer" containerID="bb0df5a5129628cc18731e4f79b9ba91b393506be248d9d687061d755aaf7b34" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.032381 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-59fe-account-create-update-zp867"] Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.054737 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-59fe-account-create-update-zp867"] Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.318017 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.403259 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bflph\" (UniqueName: \"kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph\") pod \"73d500de-c46d-45b8-a379-eba0e670f1af\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.403331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities\") pod \"73d500de-c46d-45b8-a379-eba0e670f1af\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.403391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content\") pod \"73d500de-c46d-45b8-a379-eba0e670f1af\" (UID: \"73d500de-c46d-45b8-a379-eba0e670f1af\") " Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.404555 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities" (OuterVolumeSpecName: "utilities") pod "73d500de-c46d-45b8-a379-eba0e670f1af" (UID: "73d500de-c46d-45b8-a379-eba0e670f1af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.414956 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph" (OuterVolumeSpecName: "kube-api-access-bflph") pod "73d500de-c46d-45b8-a379-eba0e670f1af" (UID: "73d500de-c46d-45b8-a379-eba0e670f1af"). InnerVolumeSpecName "kube-api-access-bflph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.448091 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73d500de-c46d-45b8-a379-eba0e670f1af" (UID: "73d500de-c46d-45b8-a379-eba0e670f1af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.505077 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bflph\" (UniqueName: \"kubernetes.io/projected/73d500de-c46d-45b8-a379-eba0e670f1af-kube-api-access-bflph\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.505106 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.505115 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73d500de-c46d-45b8-a379-eba0e670f1af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.582448 4801 generic.go:334] "Generic (PLEG): container finished" podID="94086867-d5b4-4c97-9f39-2df6a18bd4b7" containerID="8ecec5e2acc1c36e977ecb87ddbf62cf6b2efc24c1854efaec2b5f128882562f" exitCode=0 Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.582515 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" event={"ID":"94086867-d5b4-4c97-9f39-2df6a18bd4b7","Type":"ContainerDied","Data":"8ecec5e2acc1c36e977ecb87ddbf62cf6b2efc24c1854efaec2b5f128882562f"} Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.586390 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-pws28" event={"ID":"73d500de-c46d-45b8-a379-eba0e670f1af","Type":"ContainerDied","Data":"f377023c48ce2b56112d7ec1f3b7b70fb8eb693b7a1e24e102bd746e926c9989"} Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.586427 4801 scope.go:117] "RemoveContainer" containerID="4e2b5a8c3fbe216448d329dbe0657b1ae0f7100e22aa9acaf5733c395aa013ae" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.586554 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pws28" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.606873 4801 scope.go:117] "RemoveContainer" containerID="91ef0e5ad99e50ec46befbb0f595cc479e0e2a7bd3b92f0057234d9d8ba154da" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.625555 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.630539 4801 scope.go:117] "RemoveContainer" containerID="04c6f74a03cd0c71a0503a8defd1f17e75802e87d691c8c7012d40d6cd108fa8" Dec 06 03:39:18 crc kubenswrapper[4801]: I1206 03:39:18.633604 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pws28"] Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.040528 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-cr4fx"] Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.049003 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-kp8cl"] Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.057484 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-kp8cl"] Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.064895 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-cr4fx"] Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 
03:39:19.223163 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" path="/var/lib/kubelet/pods/73d500de-c46d-45b8-a379-eba0e670f1af/volumes" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.224237 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b03c3261-5b37-43cb-8148-a9e709c13a1e" path="/var/lib/kubelet/pods/b03c3261-5b37-43cb-8148-a9e709c13a1e/volumes" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.224989 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d727fb0f-a514-492e-9e91-df76ceccf42d" path="/var/lib/kubelet/pods/d727fb0f-a514-492e-9e91-df76ceccf42d/volumes" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.226012 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc374618-2dac-4256-9048-76b3774d35b8" path="/var/lib/kubelet/pods/fc374618-2dac-4256-9048-76b3774d35b8/volumes" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.226546 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" path="/var/lib/kubelet/pods/fd30fc0e-d407-40cf-9857-aa61f2a84b8b/volumes" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.551061 4801 scope.go:117] "RemoveContainer" containerID="bd01a7222b8a1d54818206a0aff6ac828bb3c5b9bd07f5471d12ccb1f6f06d07" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.594019 4801 scope.go:117] "RemoveContainer" containerID="9bf96e20d2bc4a9fce320775cbbcf65124f692b13a62efc9d077fc043b695dd0" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.629907 4801 scope.go:117] "RemoveContainer" containerID="b929d467eea811d7bb7b6b5814208db044bb55663ad964587efa6bd04d133433" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.716570 4801 scope.go:117] "RemoveContainer" containerID="3b3be67890a1583b30d351a35d619c24a8bc1cfd74cc1985eaa99468a4ee1904" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.752175 4801 
scope.go:117] "RemoveContainer" containerID="5d5c37e3b6a3b18af919da0a5823fb1c123a0f4a1461e56cc227aec4964136b9" Dec 06 03:39:19 crc kubenswrapper[4801]: I1206 03:39:19.794460 4801 scope.go:117] "RemoveContainer" containerID="11a8202632d51f834f18df9d67cc3ddb61aa2203ef31bb28adad5818a2d887a8" Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.024833 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-be62-account-create-update-pxz7k"] Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.033018 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-be62-account-create-update-pxz7k"] Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.097125 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.130796 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5k2n\" (UniqueName: \"kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n\") pod \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.131070 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key\") pod \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.131128 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory\") pod \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\" (UID: \"94086867-d5b4-4c97-9f39-2df6a18bd4b7\") " Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.136161 4801 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n" (OuterVolumeSpecName: "kube-api-access-d5k2n") pod "94086867-d5b4-4c97-9f39-2df6a18bd4b7" (UID: "94086867-d5b4-4c97-9f39-2df6a18bd4b7"). InnerVolumeSpecName "kube-api-access-d5k2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.155466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory" (OuterVolumeSpecName: "inventory") pod "94086867-d5b4-4c97-9f39-2df6a18bd4b7" (UID: "94086867-d5b4-4c97-9f39-2df6a18bd4b7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.159850 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "94086867-d5b4-4c97-9f39-2df6a18bd4b7" (UID: "94086867-d5b4-4c97-9f39-2df6a18bd4b7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.233671 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.233718 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/94086867-d5b4-4c97-9f39-2df6a18bd4b7-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.233741 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5k2n\" (UniqueName: \"kubernetes.io/projected/94086867-d5b4-4c97-9f39-2df6a18bd4b7-kube-api-access-d5k2n\") on node \"crc\" DevicePath \"\""
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.649447 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz" event={"ID":"94086867-d5b4-4c97-9f39-2df6a18bd4b7","Type":"ContainerDied","Data":"33dad6b55215cf3c3f799ea7f1abf5a6f3aee41c616dca1297967474a8ab9dae"}
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.649489 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dad6b55215cf3c3f799ea7f1abf5a6f3aee41c616dca1297967474a8ab9dae"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.649511 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.689662 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"]
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690017 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="extract-content"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690035 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="extract-content"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690048 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="extract-utilities"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690055 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="extract-utilities"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690064 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="extract-utilities"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690072 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="extract-utilities"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690085 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="extract-content"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690091 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="extract-content"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690102 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690109 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690126 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690132 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: E1206 03:39:20.690142 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94086867-d5b4-4c97-9f39-2df6a18bd4b7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690150 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="94086867-d5b4-4c97-9f39-2df6a18bd4b7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690332 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="94086867-d5b4-4c97-9f39-2df6a18bd4b7" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690347 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd30fc0e-d407-40cf-9857-aa61f2a84b8b" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.690366 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d500de-c46d-45b8-a379-eba0e670f1af" containerName="registry-server"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.695206 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.697205 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.697622 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.698732 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.698955 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.711567 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"]
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.743017 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f99nw\" (UniqueName: \"kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.743222 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.743260 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.845081 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.845128 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.845877 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f99nw\" (UniqueName: \"kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.849894 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.850174 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:20 crc kubenswrapper[4801]: I1206 03:39:20.862340 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f99nw\" (UniqueName: \"kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:21 crc kubenswrapper[4801]: I1206 03:39:21.057279 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:39:21 crc kubenswrapper[4801]: I1206 03:39:21.229190 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3655d081-5002-4403-869e-e027935e4f0b" path="/var/lib/kubelet/pods/3655d081-5002-4403-869e-e027935e4f0b/volumes"
Dec 06 03:39:21 crc kubenswrapper[4801]: I1206 03:39:21.565040 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"]
Dec 06 03:39:21 crc kubenswrapper[4801]: I1206 03:39:21.671240 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6" event={"ID":"caca002a-afc5-45e5-9400-3f8ba6b0978a","Type":"ContainerStarted","Data":"bba9dc126116a6568425bab73d3e92b045d4a812d95f712782a5db90e3918d1e"}
Dec 06 03:39:23 crc kubenswrapper[4801]: I1206 03:39:23.036806 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2df8-account-create-update-w9l8q"]
Dec 06 03:39:23 crc kubenswrapper[4801]: I1206 03:39:23.044794 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2df8-account-create-update-w9l8q"]
Dec 06 03:39:23 crc kubenswrapper[4801]: I1206 03:39:23.223541 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e58a01-f644-4664-8d9b-f7c22938e4aa" path="/var/lib/kubelet/pods/80e58a01-f644-4664-8d9b-f7c22938e4aa/volumes"
Dec 06 03:39:23 crc kubenswrapper[4801]: I1206 03:39:23.689639 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6" event={"ID":"caca002a-afc5-45e5-9400-3f8ba6b0978a","Type":"ContainerStarted","Data":"e592898de4c814a905126127c7c946e0ee478b696ac268e2e3a07d184ec54ba8"}
Dec 06 03:39:23 crc kubenswrapper[4801]: I1206 03:39:23.713834 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6" podStartSLOduration=2.616952372 podStartE2EDuration="3.713808737s" podCreationTimestamp="2025-12-06 03:39:20 +0000 UTC" firstStartedPulling="2025-12-06 03:39:21.576350817 +0000 UTC m=+2014.698958389" lastFinishedPulling="2025-12-06 03:39:22.673207192 +0000 UTC m=+2015.795814754" observedRunningTime="2025-12-06 03:39:23.708024325 +0000 UTC m=+2016.830631897" watchObservedRunningTime="2025-12-06 03:39:23.713808737 +0000 UTC m=+2016.836416319"
Dec 06 03:40:11 crc kubenswrapper[4801]: I1206 03:40:11.170256 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 03:40:11 crc kubenswrapper[4801]: I1206 03:40:11.171310 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 03:40:18 crc kubenswrapper[4801]: I1206 03:40:18.046508 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7h8zf"]
Dec 06 03:40:18 crc kubenswrapper[4801]: I1206 03:40:18.056551 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7h8zf"]
Dec 06 03:40:19 crc kubenswrapper[4801]: I1206 03:40:19.225815 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23456664-b3cb-40c4-a0a1-a944eef10179" path="/var/lib/kubelet/pods/23456664-b3cb-40c4-a0a1-a944eef10179/volumes"
Dec 06 03:40:19 crc kubenswrapper[4801]: I1206 03:40:19.951995 4801 scope.go:117] "RemoveContainer" containerID="3d05b0fee21513e52b9337a18e940c9bf240916f4dd9ea09c1e9127034c69531"
Dec 06 03:40:19 crc kubenswrapper[4801]: I1206 03:40:19.984413 4801 scope.go:117] "RemoveContainer" containerID="0b3cc23b79243a74ecad6499497fb48a0a57df82fb2f6070413c9f8149e8d7e1"
Dec 06 03:40:20 crc kubenswrapper[4801]: I1206 03:40:20.111897 4801 scope.go:117] "RemoveContainer" containerID="8cf1887b8275b3cb968c161a9697a9fddb6b57fb7b4ea9f0e06b4576687a7c37"
Dec 06 03:40:20 crc kubenswrapper[4801]: I1206 03:40:20.197907 4801 generic.go:334] "Generic (PLEG): container finished" podID="caca002a-afc5-45e5-9400-3f8ba6b0978a" containerID="e592898de4c814a905126127c7c946e0ee478b696ac268e2e3a07d184ec54ba8" exitCode=0
Dec 06 03:40:20 crc kubenswrapper[4801]: I1206 03:40:20.197988 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6" event={"ID":"caca002a-afc5-45e5-9400-3f8ba6b0978a","Type":"ContainerDied","Data":"e592898de4c814a905126127c7c946e0ee478b696ac268e2e3a07d184ec54ba8"}
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.657025 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.697979 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f99nw\" (UniqueName: \"kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw\") pod \"caca002a-afc5-45e5-9400-3f8ba6b0978a\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") "
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.698279 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key\") pod \"caca002a-afc5-45e5-9400-3f8ba6b0978a\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") "
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.698391 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory\") pod \"caca002a-afc5-45e5-9400-3f8ba6b0978a\" (UID: \"caca002a-afc5-45e5-9400-3f8ba6b0978a\") "
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.703785 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw" (OuterVolumeSpecName: "kube-api-access-f99nw") pod "caca002a-afc5-45e5-9400-3f8ba6b0978a" (UID: "caca002a-afc5-45e5-9400-3f8ba6b0978a"). InnerVolumeSpecName "kube-api-access-f99nw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.724300 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory" (OuterVolumeSpecName: "inventory") pod "caca002a-afc5-45e5-9400-3f8ba6b0978a" (UID: "caca002a-afc5-45e5-9400-3f8ba6b0978a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.724638 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "caca002a-afc5-45e5-9400-3f8ba6b0978a" (UID: "caca002a-afc5-45e5-9400-3f8ba6b0978a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.799930 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.800235 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/caca002a-afc5-45e5-9400-3f8ba6b0978a-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:21 crc kubenswrapper[4801]: I1206 03:40:21.800245 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f99nw\" (UniqueName: \"kubernetes.io/projected/caca002a-afc5-45e5-9400-3f8ba6b0978a-kube-api-access-f99nw\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.215409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6" event={"ID":"caca002a-afc5-45e5-9400-3f8ba6b0978a","Type":"ContainerDied","Data":"bba9dc126116a6568425bab73d3e92b045d4a812d95f712782a5db90e3918d1e"}
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.215450 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.215470 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba9dc126116a6568425bab73d3e92b045d4a812d95f712782a5db90e3918d1e"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.299269 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4552d"]
Dec 06 03:40:22 crc kubenswrapper[4801]: E1206 03:40:22.299687 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="caca002a-afc5-45e5-9400-3f8ba6b0978a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.299707 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="caca002a-afc5-45e5-9400-3f8ba6b0978a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.302254 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="caca002a-afc5-45e5-9400-3f8ba6b0978a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.303076 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.308536 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4552d"]
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.309877 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.310202 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.310962 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.311737 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.408954 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtbdn\" (UniqueName: \"kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.409036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.409114 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.511477 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtbdn\" (UniqueName: \"kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.511564 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.511660 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.515617 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.516912 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.540452 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtbdn\" (UniqueName: \"kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn\") pod \"ssh-known-hosts-edpm-deployment-4552d\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") " pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:22 crc kubenswrapper[4801]: I1206 03:40:22.630045 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:23 crc kubenswrapper[4801]: I1206 03:40:23.171726 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4552d"]
Dec 06 03:40:23 crc kubenswrapper[4801]: I1206 03:40:23.224018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4552d" event={"ID":"56777df7-ed53-4b2c-af02-b24ce707927e","Type":"ContainerStarted","Data":"0c8b1d6947a0cb5da400cefcf73d9a964357214d4b0054d437f97c7a75f2883d"}
Dec 06 03:40:25 crc kubenswrapper[4801]: I1206 03:40:25.241563 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4552d" event={"ID":"56777df7-ed53-4b2c-af02-b24ce707927e","Type":"ContainerStarted","Data":"9fd057034ed0a44b614e19d79bf52cd42ea98b353360c05f33c350d4320d9910"}
Dec 06 03:40:25 crc kubenswrapper[4801]: I1206 03:40:25.260735 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4552d" podStartSLOduration=2.137968851 podStartE2EDuration="3.260711395s" podCreationTimestamp="2025-12-06 03:40:22 +0000 UTC" firstStartedPulling="2025-12-06 03:40:23.169920123 +0000 UTC m=+2076.292527695" lastFinishedPulling="2025-12-06 03:40:24.292662667 +0000 UTC m=+2077.415270239" observedRunningTime="2025-12-06 03:40:25.258048185 +0000 UTC m=+2078.380655757" watchObservedRunningTime="2025-12-06 03:40:25.260711395 +0000 UTC m=+2078.383318957"
Dec 06 03:40:33 crc kubenswrapper[4801]: I1206 03:40:33.313271 4801 generic.go:334] "Generic (PLEG): container finished" podID="56777df7-ed53-4b2c-af02-b24ce707927e" containerID="9fd057034ed0a44b614e19d79bf52cd42ea98b353360c05f33c350d4320d9910" exitCode=0
Dec 06 03:40:33 crc kubenswrapper[4801]: I1206 03:40:33.313349 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4552d" event={"ID":"56777df7-ed53-4b2c-af02-b24ce707927e","Type":"ContainerDied","Data":"9fd057034ed0a44b614e19d79bf52cd42ea98b353360c05f33c350d4320d9910"}
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.702519 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.848907 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam\") pod \"56777df7-ed53-4b2c-af02-b24ce707927e\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") "
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.849035 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0\") pod \"56777df7-ed53-4b2c-af02-b24ce707927e\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") "
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.849148 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtbdn\" (UniqueName: \"kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn\") pod \"56777df7-ed53-4b2c-af02-b24ce707927e\" (UID: \"56777df7-ed53-4b2c-af02-b24ce707927e\") "
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.854361 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn" (OuterVolumeSpecName: "kube-api-access-mtbdn") pod "56777df7-ed53-4b2c-af02-b24ce707927e" (UID: "56777df7-ed53-4b2c-af02-b24ce707927e"). InnerVolumeSpecName "kube-api-access-mtbdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.874226 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "56777df7-ed53-4b2c-af02-b24ce707927e" (UID: "56777df7-ed53-4b2c-af02-b24ce707927e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.878448 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56777df7-ed53-4b2c-af02-b24ce707927e" (UID: "56777df7-ed53-4b2c-af02-b24ce707927e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.951195 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.951530 4801 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/56777df7-ed53-4b2c-af02-b24ce707927e-inventory-0\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:34 crc kubenswrapper[4801]: I1206 03:40:34.951549 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtbdn\" (UniqueName: \"kubernetes.io/projected/56777df7-ed53-4b2c-af02-b24ce707927e-kube-api-access-mtbdn\") on node \"crc\" DevicePath \"\""
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.331856 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4552d" event={"ID":"56777df7-ed53-4b2c-af02-b24ce707927e","Type":"ContainerDied","Data":"0c8b1d6947a0cb5da400cefcf73d9a964357214d4b0054d437f97c7a75f2883d"}
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.331925 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8b1d6947a0cb5da400cefcf73d9a964357214d4b0054d437f97c7a75f2883d"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.331896 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4552d"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.393343 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"]
Dec 06 03:40:35 crc kubenswrapper[4801]: E1206 03:40:35.393795 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56777df7-ed53-4b2c-af02-b24ce707927e" containerName="ssh-known-hosts-edpm-deployment"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.393816 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="56777df7-ed53-4b2c-af02-b24ce707927e" containerName="ssh-known-hosts-edpm-deployment"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.394021 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="56777df7-ed53-4b2c-af02-b24ce707927e" containerName="ssh-known-hosts-edpm-deployment"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.394643 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.398124 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.398849 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.399843 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.410101 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.425531 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"]
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.461722 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.461855 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.461887 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp6jc\" (UniqueName: \"kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.563602 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.563683 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.563707 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp6jc\" (UniqueName: \"kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.568267 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.568872 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.579278 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp6jc\" (UniqueName: \"kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-kwq8r\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"
Dec 06 03:40:35 crc kubenswrapper[4801]: I1206 03:40:35.713495 4801 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" Dec 06 03:40:36 crc kubenswrapper[4801]: I1206 03:40:36.238416 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"] Dec 06 03:40:36 crc kubenswrapper[4801]: W1206 03:40:36.241621 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0813e3d8_6857_4d8d_83ef_b43c1ff774e1.slice/crio-76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d WatchSource:0}: Error finding container 76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d: Status 404 returned error can't find the container with id 76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d Dec 06 03:40:36 crc kubenswrapper[4801]: I1206 03:40:36.340608 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" event={"ID":"0813e3d8-6857-4d8d-83ef-b43c1ff774e1","Type":"ContainerStarted","Data":"76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d"} Dec 06 03:40:37 crc kubenswrapper[4801]: I1206 03:40:37.349856 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" event={"ID":"0813e3d8-6857-4d8d-83ef-b43c1ff774e1","Type":"ContainerStarted","Data":"2c13e5134dec0e2a0f60a3aec3d87b345c6fbea18bdb5c564cfcbb5ab1f4eaa8"} Dec 06 03:40:37 crc kubenswrapper[4801]: I1206 03:40:37.376937 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" podStartSLOduration=1.9989947460000002 podStartE2EDuration="2.376913801s" podCreationTimestamp="2025-12-06 03:40:35 +0000 UTC" firstStartedPulling="2025-12-06 03:40:36.24556948 +0000 UTC m=+2089.368177052" lastFinishedPulling="2025-12-06 03:40:36.623488535 +0000 UTC m=+2089.746096107" 
observedRunningTime="2025-12-06 03:40:37.365124381 +0000 UTC m=+2090.487731953" watchObservedRunningTime="2025-12-06 03:40:37.376913801 +0000 UTC m=+2090.499521373" Dec 06 03:40:41 crc kubenswrapper[4801]: I1206 03:40:41.169961 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:40:41 crc kubenswrapper[4801]: I1206 03:40:41.170462 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:40:42 crc kubenswrapper[4801]: I1206 03:40:42.048214 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-tpqcc"] Dec 06 03:40:42 crc kubenswrapper[4801]: I1206 03:40:42.055689 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-tpqcc"] Dec 06 03:40:43 crc kubenswrapper[4801]: I1206 03:40:43.225353 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb686be-6bac-49fa-a164-543b9c1d7952" path="/var/lib/kubelet/pods/bfb686be-6bac-49fa-a164-543b9c1d7952/volumes" Dec 06 03:40:44 crc kubenswrapper[4801]: I1206 03:40:44.029695 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v9x44"] Dec 06 03:40:44 crc kubenswrapper[4801]: I1206 03:40:44.037491 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-v9x44"] Dec 06 03:40:45 crc kubenswrapper[4801]: I1206 03:40:45.221721 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="064e28c8-c61c-4012-8e99-c5996a34ff9d" path="/var/lib/kubelet/pods/064e28c8-c61c-4012-8e99-c5996a34ff9d/volumes" Dec 06 03:40:47 crc kubenswrapper[4801]: I1206 03:40:47.446218 4801 generic.go:334] "Generic (PLEG): container finished" podID="0813e3d8-6857-4d8d-83ef-b43c1ff774e1" containerID="2c13e5134dec0e2a0f60a3aec3d87b345c6fbea18bdb5c564cfcbb5ab1f4eaa8" exitCode=0 Dec 06 03:40:47 crc kubenswrapper[4801]: I1206 03:40:47.446294 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" event={"ID":"0813e3d8-6857-4d8d-83ef-b43c1ff774e1","Type":"ContainerDied","Data":"2c13e5134dec0e2a0f60a3aec3d87b345c6fbea18bdb5c564cfcbb5ab1f4eaa8"} Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.851108 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.902643 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory\") pod \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.902865 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp6jc\" (UniqueName: \"kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc\") pod \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.902900 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key\") pod \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\" (UID: \"0813e3d8-6857-4d8d-83ef-b43c1ff774e1\") " Dec 06 03:40:48 crc 
kubenswrapper[4801]: I1206 03:40:48.910934 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc" (OuterVolumeSpecName: "kube-api-access-cp6jc") pod "0813e3d8-6857-4d8d-83ef-b43c1ff774e1" (UID: "0813e3d8-6857-4d8d-83ef-b43c1ff774e1"). InnerVolumeSpecName "kube-api-access-cp6jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.933933 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory" (OuterVolumeSpecName: "inventory") pod "0813e3d8-6857-4d8d-83ef-b43c1ff774e1" (UID: "0813e3d8-6857-4d8d-83ef-b43c1ff774e1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:40:48 crc kubenswrapper[4801]: I1206 03:40:48.941943 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0813e3d8-6857-4d8d-83ef-b43c1ff774e1" (UID: "0813e3d8-6857-4d8d-83ef-b43c1ff774e1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.004503 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp6jc\" (UniqueName: \"kubernetes.io/projected/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-kube-api-access-cp6jc\") on node \"crc\" DevicePath \"\"" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.004544 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.004557 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0813e3d8-6857-4d8d-83ef-b43c1ff774e1-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.466793 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" event={"ID":"0813e3d8-6857-4d8d-83ef-b43c1ff774e1","Type":"ContainerDied","Data":"76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d"} Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.466841 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76bf41ae70777cff8a80824ac78743cb0e6614ac038f39075c14a78f30baf36d" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.466840 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.529021 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs"] Dec 06 03:40:49 crc kubenswrapper[4801]: E1206 03:40:49.529357 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0813e3d8-6857-4d8d-83ef-b43c1ff774e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.529374 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0813e3d8-6857-4d8d-83ef-b43c1ff774e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.529532 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0813e3d8-6857-4d8d-83ef-b43c1ff774e1" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.530089 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.532505 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.532963 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.533240 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.533396 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.539716 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs"] Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.617072 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.617370 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.617450 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzsf\" (UniqueName: \"kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.718975 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.719209 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppzsf\" (UniqueName: \"kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.719259 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.724490 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: 
\"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.729194 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.736458 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzsf\" (UniqueName: \"kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:49 crc kubenswrapper[4801]: I1206 03:40:49.889911 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:40:50 crc kubenswrapper[4801]: I1206 03:40:50.425269 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs"] Dec 06 03:40:50 crc kubenswrapper[4801]: I1206 03:40:50.475518 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" event={"ID":"4f8c82ec-4188-4cfb-8179-29d123ef6d8d","Type":"ContainerStarted","Data":"b58b6b62692ae6974fbde4bfc113fecdec0f0505f485669f76cace3bd70d71ed"} Dec 06 03:40:51 crc kubenswrapper[4801]: I1206 03:40:51.484051 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" event={"ID":"4f8c82ec-4188-4cfb-8179-29d123ef6d8d","Type":"ContainerStarted","Data":"0c4322b100352de72231146bc7d93ca7a78b5b96105083070c7884ce768fddb4"} Dec 06 03:40:51 crc kubenswrapper[4801]: I1206 03:40:51.512547 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" podStartSLOduration=1.999115226 podStartE2EDuration="2.512529492s" podCreationTimestamp="2025-12-06 03:40:49 +0000 UTC" firstStartedPulling="2025-12-06 03:40:50.431400592 +0000 UTC m=+2103.554008164" lastFinishedPulling="2025-12-06 03:40:50.944814858 +0000 UTC m=+2104.067422430" observedRunningTime="2025-12-06 03:40:51.506985957 +0000 UTC m=+2104.629593529" watchObservedRunningTime="2025-12-06 03:40:51.512529492 +0000 UTC m=+2104.635137064" Dec 06 03:41:01 crc kubenswrapper[4801]: I1206 03:41:01.566362 4801 generic.go:334] "Generic (PLEG): container finished" podID="4f8c82ec-4188-4cfb-8179-29d123ef6d8d" containerID="0c4322b100352de72231146bc7d93ca7a78b5b96105083070c7884ce768fddb4" exitCode=0 Dec 06 03:41:01 crc kubenswrapper[4801]: I1206 03:41:01.566443 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" event={"ID":"4f8c82ec-4188-4cfb-8179-29d123ef6d8d","Type":"ContainerDied","Data":"0c4322b100352de72231146bc7d93ca7a78b5b96105083070c7884ce768fddb4"} Dec 06 03:41:02 crc kubenswrapper[4801]: I1206 03:41:02.991325 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.057014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory\") pod \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.057330 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppzsf\" (UniqueName: \"kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf\") pod \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.057447 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key\") pod \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\" (UID: \"4f8c82ec-4188-4cfb-8179-29d123ef6d8d\") " Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.063397 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf" (OuterVolumeSpecName: "kube-api-access-ppzsf") pod "4f8c82ec-4188-4cfb-8179-29d123ef6d8d" (UID: "4f8c82ec-4188-4cfb-8179-29d123ef6d8d"). InnerVolumeSpecName "kube-api-access-ppzsf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.083427 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f8c82ec-4188-4cfb-8179-29d123ef6d8d" (UID: "4f8c82ec-4188-4cfb-8179-29d123ef6d8d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.085084 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory" (OuterVolumeSpecName: "inventory") pod "4f8c82ec-4188-4cfb-8179-29d123ef6d8d" (UID: "4f8c82ec-4188-4cfb-8179-29d123ef6d8d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.158973 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppzsf\" (UniqueName: \"kubernetes.io/projected/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-kube-api-access-ppzsf\") on node \"crc\" DevicePath \"\"" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.159010 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.159021 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f8c82ec-4188-4cfb-8179-29d123ef6d8d-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.585182 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" 
event={"ID":"4f8c82ec-4188-4cfb-8179-29d123ef6d8d","Type":"ContainerDied","Data":"b58b6b62692ae6974fbde4bfc113fecdec0f0505f485669f76cace3bd70d71ed"} Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.585232 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b58b6b62692ae6974fbde4bfc113fecdec0f0505f485669f76cace3bd70d71ed" Dec 06 03:41:03 crc kubenswrapper[4801]: I1206 03:41:03.585248 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs" Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.170132 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.170591 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.170641 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.171294 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:41:11 crc 
kubenswrapper[4801]: I1206 03:41:11.171345 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b" gracePeriod=600 Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.653449 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b" exitCode=0 Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.653498 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b"} Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.653918 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74"} Dec 06 03:41:11 crc kubenswrapper[4801]: I1206 03:41:11.653939 4801 scope.go:117] "RemoveContainer" containerID="53d40cd44d81f4ddc9ab76b4c7ecde2e1c19d8833ebbaa7f257d015a42b75f28" Dec 06 03:41:20 crc kubenswrapper[4801]: I1206 03:41:20.211356 4801 scope.go:117] "RemoveContainer" containerID="8d1503562c3bdec19876a33a3799370f551cf64b1bc92a9141c81eb24797df24" Dec 06 03:41:20 crc kubenswrapper[4801]: I1206 03:41:20.260087 4801 scope.go:117] "RemoveContainer" containerID="5d4dbeae13a04a600dd8399caf51523306daeeab0298ecadb5af907c42c87ff8" Dec 06 03:41:28 crc kubenswrapper[4801]: I1206 03:41:28.042096 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-cell-mapping-x48bz"] Dec 06 03:41:28 crc kubenswrapper[4801]: I1206 03:41:28.049797 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-x48bz"] Dec 06 03:41:29 crc kubenswrapper[4801]: I1206 03:41:29.226558 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09a2d0dd-f819-4e34-90f5-04c2d7ac63d0" path="/var/lib/kubelet/pods/09a2d0dd-f819-4e34-90f5-04c2d7ac63d0/volumes" Dec 06 03:42:20 crc kubenswrapper[4801]: I1206 03:42:20.338189 4801 scope.go:117] "RemoveContainer" containerID="a28870c1588a4be1495a4c58e352390ee00ab8f982551889d6715a8d6ddabf25" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.479561 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:25 crc kubenswrapper[4801]: E1206 03:42:25.480556 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f8c82ec-4188-4cfb-8179-29d123ef6d8d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.480570 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f8c82ec-4188-4cfb-8179-29d123ef6d8d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.480774 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f8c82ec-4188-4cfb-8179-29d123ef6d8d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.482072 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.500777 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.637897 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.637972 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.638339 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wld9w\" (UniqueName: \"kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.739890 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.739940 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.740051 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wld9w\" (UniqueName: \"kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.740475 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.740573 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.767174 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wld9w\" (UniqueName: \"kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w\") pod \"redhat-marketplace-mwstx\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:25 crc kubenswrapper[4801]: I1206 03:42:25.805943 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:26 crc kubenswrapper[4801]: I1206 03:42:26.281175 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:27 crc kubenswrapper[4801]: I1206 03:42:27.272309 4801 generic.go:334] "Generic (PLEG): container finished" podID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerID="b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377" exitCode=0 Dec 06 03:42:27 crc kubenswrapper[4801]: I1206 03:42:27.272425 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerDied","Data":"b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377"} Dec 06 03:42:27 crc kubenswrapper[4801]: I1206 03:42:27.272611 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerStarted","Data":"bebc39b1bec5e839f0979b97835a22a7b20f2cdaf5ad2cecc8980782f1830036"} Dec 06 03:42:28 crc kubenswrapper[4801]: I1206 03:42:28.281655 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerStarted","Data":"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14"} Dec 06 03:42:29 crc kubenswrapper[4801]: I1206 03:42:29.291184 4801 generic.go:334] "Generic (PLEG): container finished" podID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerID="a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14" exitCode=0 Dec 06 03:42:29 crc kubenswrapper[4801]: I1206 03:42:29.291233 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" 
event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerDied","Data":"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14"} Dec 06 03:42:30 crc kubenswrapper[4801]: I1206 03:42:30.300802 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerStarted","Data":"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c"} Dec 06 03:42:30 crc kubenswrapper[4801]: I1206 03:42:30.323227 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwstx" podStartSLOduration=2.859878118 podStartE2EDuration="5.323206678s" podCreationTimestamp="2025-12-06 03:42:25 +0000 UTC" firstStartedPulling="2025-12-06 03:42:27.276176239 +0000 UTC m=+2200.398783811" lastFinishedPulling="2025-12-06 03:42:29.739504799 +0000 UTC m=+2202.862112371" observedRunningTime="2025-12-06 03:42:30.320504715 +0000 UTC m=+2203.443112297" watchObservedRunningTime="2025-12-06 03:42:30.323206678 +0000 UTC m=+2203.445814250" Dec 06 03:42:35 crc kubenswrapper[4801]: I1206 03:42:35.806710 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:35 crc kubenswrapper[4801]: I1206 03:42:35.807302 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:35 crc kubenswrapper[4801]: I1206 03:42:35.850034 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:36 crc kubenswrapper[4801]: I1206 03:42:36.402361 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:36 crc kubenswrapper[4801]: I1206 03:42:36.472214 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:38 crc kubenswrapper[4801]: I1206 03:42:38.368088 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwstx" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="registry-server" containerID="cri-o://bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c" gracePeriod=2 Dec 06 03:42:38 crc kubenswrapper[4801]: I1206 03:42:38.939593 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.089063 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wld9w\" (UniqueName: \"kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w\") pod \"8a8f842a-19df-43b5-acc1-784f05150c2a\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.089133 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content\") pod \"8a8f842a-19df-43b5-acc1-784f05150c2a\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.089177 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities\") pod \"8a8f842a-19df-43b5-acc1-784f05150c2a\" (UID: \"8a8f842a-19df-43b5-acc1-784f05150c2a\") " Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.090542 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities" (OuterVolumeSpecName: "utilities") pod "8a8f842a-19df-43b5-acc1-784f05150c2a" (UID: 
"8a8f842a-19df-43b5-acc1-784f05150c2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.101135 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w" (OuterVolumeSpecName: "kube-api-access-wld9w") pod "8a8f842a-19df-43b5-acc1-784f05150c2a" (UID: "8a8f842a-19df-43b5-acc1-784f05150c2a"). InnerVolumeSpecName "kube-api-access-wld9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.116170 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a8f842a-19df-43b5-acc1-784f05150c2a" (UID: "8a8f842a-19df-43b5-acc1-784f05150c2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.192603 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wld9w\" (UniqueName: \"kubernetes.io/projected/8a8f842a-19df-43b5-acc1-784f05150c2a-kube-api-access-wld9w\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.192688 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.192710 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a8f842a-19df-43b5-acc1-784f05150c2a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.385844 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerID="bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c" exitCode=0 Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.385905 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerDied","Data":"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c"} Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.385956 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwstx" event={"ID":"8a8f842a-19df-43b5-acc1-784f05150c2a","Type":"ContainerDied","Data":"bebc39b1bec5e839f0979b97835a22a7b20f2cdaf5ad2cecc8980782f1830036"} Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.385963 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwstx" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.385981 4801 scope.go:117] "RemoveContainer" containerID="bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.418340 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.420366 4801 scope.go:117] "RemoveContainer" containerID="a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.426249 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwstx"] Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.465957 4801 scope.go:117] "RemoveContainer" containerID="b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.500826 4801 scope.go:117] "RemoveContainer" 
containerID="bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c" Dec 06 03:42:39 crc kubenswrapper[4801]: E1206 03:42:39.501497 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c\": container with ID starting with bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c not found: ID does not exist" containerID="bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.501534 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c"} err="failed to get container status \"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c\": rpc error: code = NotFound desc = could not find container \"bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c\": container with ID starting with bf8ee6457de6339d15ef8df326a3d463243bfe8cdfe57e7b6b7eb9eb23a7479c not found: ID does not exist" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.501557 4801 scope.go:117] "RemoveContainer" containerID="a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14" Dec 06 03:42:39 crc kubenswrapper[4801]: E1206 03:42:39.502221 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14\": container with ID starting with a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14 not found: ID does not exist" containerID="a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.502260 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14"} err="failed to get container status \"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14\": rpc error: code = NotFound desc = could not find container \"a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14\": container with ID starting with a65b19ee89101a87f416f62538c05df16376453ccbadd722afc721caa68a7c14 not found: ID does not exist" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.502289 4801 scope.go:117] "RemoveContainer" containerID="b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377" Dec 06 03:42:39 crc kubenswrapper[4801]: E1206 03:42:39.502817 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377\": container with ID starting with b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377 not found: ID does not exist" containerID="b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377" Dec 06 03:42:39 crc kubenswrapper[4801]: I1206 03:42:39.502849 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377"} err="failed to get container status \"b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377\": rpc error: code = NotFound desc = could not find container \"b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377\": container with ID starting with b54c21d719f2db21551dc51099a0f459c9e800f78a80a28f87b8b0d58fd0d377 not found: ID does not exist" Dec 06 03:42:41 crc kubenswrapper[4801]: I1206 03:42:41.227986 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" path="/var/lib/kubelet/pods/8a8f842a-19df-43b5-acc1-784f05150c2a/volumes" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 
03:42:42.082947 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:42 crc kubenswrapper[4801]: E1206 03:42:42.083438 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="extract-utilities" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.083460 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="extract-utilities" Dec 06 03:42:42 crc kubenswrapper[4801]: E1206 03:42:42.083482 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="registry-server" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.083490 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="registry-server" Dec 06 03:42:42 crc kubenswrapper[4801]: E1206 03:42:42.083529 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="extract-content" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.083539 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="extract-content" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.083785 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a8f842a-19df-43b5-acc1-784f05150c2a" containerName="registry-server" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.085404 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.101143 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.251498 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.272382 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76pg8\" (UniqueName: \"kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.273013 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.374288 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.374334 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-76pg8\" (UniqueName: \"kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.374508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.374887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.376067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.393461 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76pg8\" (UniqueName: \"kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8\") pod \"community-operators-jkcls\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.410795 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:42 crc kubenswrapper[4801]: I1206 03:42:42.967231 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:42 crc kubenswrapper[4801]: W1206 03:42:42.985037 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79d88e54_04b4_4872_b364_7d9869ffd023.slice/crio-063ff4d42ed16cc68ab24020de3f80c22d5b9fd233f6e50352ee98f4e96809eb WatchSource:0}: Error finding container 063ff4d42ed16cc68ab24020de3f80c22d5b9fd233f6e50352ee98f4e96809eb: Status 404 returned error can't find the container with id 063ff4d42ed16cc68ab24020de3f80c22d5b9fd233f6e50352ee98f4e96809eb Dec 06 03:42:43 crc kubenswrapper[4801]: I1206 03:42:43.417414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerStarted","Data":"063ff4d42ed16cc68ab24020de3f80c22d5b9fd233f6e50352ee98f4e96809eb"} Dec 06 03:42:44 crc kubenswrapper[4801]: I1206 03:42:44.429473 4801 generic.go:334] "Generic (PLEG): container finished" podID="79d88e54-04b4-4872-b364-7d9869ffd023" containerID="5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137" exitCode=0 Dec 06 03:42:44 crc kubenswrapper[4801]: I1206 03:42:44.429572 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerDied","Data":"5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137"} Dec 06 03:42:45 crc kubenswrapper[4801]: I1206 03:42:45.443155 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" 
event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerStarted","Data":"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b"} Dec 06 03:42:46 crc kubenswrapper[4801]: I1206 03:42:46.455155 4801 generic.go:334] "Generic (PLEG): container finished" podID="79d88e54-04b4-4872-b364-7d9869ffd023" containerID="9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b" exitCode=0 Dec 06 03:42:46 crc kubenswrapper[4801]: I1206 03:42:46.455215 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerDied","Data":"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b"} Dec 06 03:42:47 crc kubenswrapper[4801]: I1206 03:42:47.463057 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerStarted","Data":"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894"} Dec 06 03:42:47 crc kubenswrapper[4801]: I1206 03:42:47.485537 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jkcls" podStartSLOduration=2.954215277 podStartE2EDuration="5.48551503s" podCreationTimestamp="2025-12-06 03:42:42 +0000 UTC" firstStartedPulling="2025-12-06 03:42:44.432839619 +0000 UTC m=+2217.555447191" lastFinishedPulling="2025-12-06 03:42:46.964139372 +0000 UTC m=+2220.086746944" observedRunningTime="2025-12-06 03:42:47.478400887 +0000 UTC m=+2220.601008489" watchObservedRunningTime="2025-12-06 03:42:47.48551503 +0000 UTC m=+2220.608122602" Dec 06 03:42:52 crc kubenswrapper[4801]: I1206 03:42:52.412053 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:52 crc kubenswrapper[4801]: I1206 03:42:52.412582 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:52 crc kubenswrapper[4801]: I1206 03:42:52.461506 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:52 crc kubenswrapper[4801]: I1206 03:42:52.554742 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:52 crc kubenswrapper[4801]: I1206 03:42:52.698231 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:54 crc kubenswrapper[4801]: I1206 03:42:54.522000 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jkcls" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="registry-server" containerID="cri-o://9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894" gracePeriod=2 Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.071122 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.227599 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76pg8\" (UniqueName: \"kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8\") pod \"79d88e54-04b4-4872-b364-7d9869ffd023\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.227686 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities\") pod \"79d88e54-04b4-4872-b364-7d9869ffd023\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.227799 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content\") pod \"79d88e54-04b4-4872-b364-7d9869ffd023\" (UID: \"79d88e54-04b4-4872-b364-7d9869ffd023\") " Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.228539 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities" (OuterVolumeSpecName: "utilities") pod "79d88e54-04b4-4872-b364-7d9869ffd023" (UID: "79d88e54-04b4-4872-b364-7d9869ffd023"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.234479 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8" (OuterVolumeSpecName: "kube-api-access-76pg8") pod "79d88e54-04b4-4872-b364-7d9869ffd023" (UID: "79d88e54-04b4-4872-b364-7d9869ffd023"). InnerVolumeSpecName "kube-api-access-76pg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.286264 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79d88e54-04b4-4872-b364-7d9869ffd023" (UID: "79d88e54-04b4-4872-b364-7d9869ffd023"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.330253 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76pg8\" (UniqueName: \"kubernetes.io/projected/79d88e54-04b4-4872-b364-7d9869ffd023-kube-api-access-76pg8\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.330292 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.330307 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79d88e54-04b4-4872-b364-7d9869ffd023-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.532259 4801 generic.go:334] "Generic (PLEG): container finished" podID="79d88e54-04b4-4872-b364-7d9869ffd023" containerID="9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894" exitCode=0 Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.532310 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerDied","Data":"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894"} Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.532335 4801 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkcls" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.532358 4801 scope.go:117] "RemoveContainer" containerID="9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.532344 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkcls" event={"ID":"79d88e54-04b4-4872-b364-7d9869ffd023","Type":"ContainerDied","Data":"063ff4d42ed16cc68ab24020de3f80c22d5b9fd233f6e50352ee98f4e96809eb"} Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.552510 4801 scope.go:117] "RemoveContainer" containerID="9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.577102 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.591963 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jkcls"] Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.603364 4801 scope.go:117] "RemoveContainer" containerID="5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.637549 4801 scope.go:117] "RemoveContainer" containerID="9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894" Dec 06 03:42:55 crc kubenswrapper[4801]: E1206 03:42:55.638001 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894\": container with ID starting with 9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894 not found: ID does not exist" containerID="9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.638064 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894"} err="failed to get container status \"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894\": rpc error: code = NotFound desc = could not find container \"9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894\": container with ID starting with 9c62373d3f90a7367c601d985bc066b556458794e3b799511ea9ac60f148c894 not found: ID does not exist" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.638093 4801 scope.go:117] "RemoveContainer" containerID="9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b" Dec 06 03:42:55 crc kubenswrapper[4801]: E1206 03:42:55.638825 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b\": container with ID starting with 9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b not found: ID does not exist" containerID="9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.638882 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b"} err="failed to get container status \"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b\": rpc error: code = NotFound desc = could not find container \"9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b\": container with ID starting with 9e404d46cf702f16eb665fbbfc7046598e767bf605714969bd7dc29d8207b13b not found: ID does not exist" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.638915 4801 scope.go:117] "RemoveContainer" containerID="5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137" Dec 06 03:42:55 crc kubenswrapper[4801]: E1206 
03:42:55.639276 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137\": container with ID starting with 5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137 not found: ID does not exist" containerID="5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137" Dec 06 03:42:55 crc kubenswrapper[4801]: I1206 03:42:55.639299 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137"} err="failed to get container status \"5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137\": rpc error: code = NotFound desc = could not find container \"5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137\": container with ID starting with 5dd8040190330a9c3b7b850e17b2b986aed25ef053204ad5b4a20000ba51e137 not found: ID does not exist" Dec 06 03:42:57 crc kubenswrapper[4801]: I1206 03:42:57.223865 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" path="/var/lib/kubelet/pods/79d88e54-04b4-4872-b364-7d9869ffd023/volumes" Dec 06 03:43:11 crc kubenswrapper[4801]: I1206 03:43:11.170169 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:43:11 crc kubenswrapper[4801]: I1206 03:43:11.170819 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 06 03:43:41 crc kubenswrapper[4801]: I1206 03:43:41.170046 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:43:41 crc kubenswrapper[4801]: I1206 03:43:41.170561 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:44:11 crc kubenswrapper[4801]: I1206 03:44:11.169537 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:44:11 crc kubenswrapper[4801]: I1206 03:44:11.170160 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:44:11 crc kubenswrapper[4801]: I1206 03:44:11.170212 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:44:11 crc kubenswrapper[4801]: I1206 03:44:11.170922 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74"} 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:44:11 crc kubenswrapper[4801]: I1206 03:44:11.170972 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" gracePeriod=600 Dec 06 03:44:11 crc kubenswrapper[4801]: E1206 03:44:11.288390 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:44:12 crc kubenswrapper[4801]: I1206 03:44:12.161351 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" exitCode=0 Dec 06 03:44:12 crc kubenswrapper[4801]: I1206 03:44:12.161433 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74"} Dec 06 03:44:12 crc kubenswrapper[4801]: I1206 03:44:12.161789 4801 scope.go:117] "RemoveContainer" containerID="7e1db6ee027248e2e975e23d49437335bf9e87f64d09bd3a4e738b868ed41a8b" Dec 06 03:44:12 crc kubenswrapper[4801]: I1206 03:44:12.162368 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 
06 03:44:12 crc kubenswrapper[4801]: E1206 03:44:12.162617 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:44:26 crc kubenswrapper[4801]: I1206 03:44:26.213030 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:44:26 crc kubenswrapper[4801]: E1206 03:44:26.214035 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:44:39 crc kubenswrapper[4801]: I1206 03:44:39.212186 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:44:39 crc kubenswrapper[4801]: E1206 03:44:39.212883 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:44:53 crc kubenswrapper[4801]: I1206 03:44:53.212536 4801 scope.go:117] "RemoveContainer" 
containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:44:53 crc kubenswrapper[4801]: E1206 03:44:53.213378 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.140907 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb"] Dec 06 03:45:00 crc kubenswrapper[4801]: E1206 03:45:00.141936 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="registry-server" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.141953 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="registry-server" Dec 06 03:45:00 crc kubenswrapper[4801]: E1206 03:45:00.141963 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="extract-utilities" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.141969 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="extract-utilities" Dec 06 03:45:00 crc kubenswrapper[4801]: E1206 03:45:00.141988 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="extract-content" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.141995 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="extract-content" Dec 06 03:45:00 crc kubenswrapper[4801]: 
I1206 03:45:00.142214 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d88e54-04b4-4872-b364-7d9869ffd023" containerName="registry-server" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.142800 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.144720 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.145084 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.152725 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb"] Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.340337 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.340400 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjh7\" (UniqueName: \"kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.340430 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.441956 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.442030 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjh7\" (UniqueName: \"kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.442102 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.442946 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.448292 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.457969 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjh7\" (UniqueName: \"kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7\") pod \"collect-profiles-29416545-4k8cb\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.476642 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:00 crc kubenswrapper[4801]: I1206 03:45:00.901772 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb"] Dec 06 03:45:01 crc kubenswrapper[4801]: I1206 03:45:01.580780 4801 generic.go:334] "Generic (PLEG): container finished" podID="c138298f-7ed9-4198-bb9c-f3e37aac4834" containerID="b6566ace9484d254b36f2291346d1cc13cad296d352fdc301bdf352a5edb657c" exitCode=0 Dec 06 03:45:01 crc kubenswrapper[4801]: I1206 03:45:01.580899 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" event={"ID":"c138298f-7ed9-4198-bb9c-f3e37aac4834","Type":"ContainerDied","Data":"b6566ace9484d254b36f2291346d1cc13cad296d352fdc301bdf352a5edb657c"} Dec 06 03:45:01 crc kubenswrapper[4801]: I1206 03:45:01.581196 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" event={"ID":"c138298f-7ed9-4198-bb9c-f3e37aac4834","Type":"ContainerStarted","Data":"dfcc8d1d02bb62152534318c37354c27f8b42067af6a2f9a0e3edecc83960fc1"} Dec 06 03:45:02 crc kubenswrapper[4801]: I1206 03:45:02.943512 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.105846 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtjh7\" (UniqueName: \"kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7\") pod \"c138298f-7ed9-4198-bb9c-f3e37aac4834\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.105913 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume\") pod \"c138298f-7ed9-4198-bb9c-f3e37aac4834\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.105992 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume\") pod \"c138298f-7ed9-4198-bb9c-f3e37aac4834\" (UID: \"c138298f-7ed9-4198-bb9c-f3e37aac4834\") " Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.106672 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume" (OuterVolumeSpecName: "config-volume") pod "c138298f-7ed9-4198-bb9c-f3e37aac4834" (UID: "c138298f-7ed9-4198-bb9c-f3e37aac4834"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.107225 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c138298f-7ed9-4198-bb9c-f3e37aac4834-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.112091 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c138298f-7ed9-4198-bb9c-f3e37aac4834" (UID: "c138298f-7ed9-4198-bb9c-f3e37aac4834"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.127367 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7" (OuterVolumeSpecName: "kube-api-access-qtjh7") pod "c138298f-7ed9-4198-bb9c-f3e37aac4834" (UID: "c138298f-7ed9-4198-bb9c-f3e37aac4834"). InnerVolumeSpecName "kube-api-access-qtjh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.208678 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c138298f-7ed9-4198-bb9c-f3e37aac4834-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.208716 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtjh7\" (UniqueName: \"kubernetes.io/projected/c138298f-7ed9-4198-bb9c-f3e37aac4834-kube-api-access-qtjh7\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.599656 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" event={"ID":"c138298f-7ed9-4198-bb9c-f3e37aac4834","Type":"ContainerDied","Data":"dfcc8d1d02bb62152534318c37354c27f8b42067af6a2f9a0e3edecc83960fc1"} Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.599984 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcc8d1d02bb62152534318c37354c27f8b42067af6a2f9a0e3edecc83960fc1" Dec 06 03:45:03 crc kubenswrapper[4801]: I1206 03:45:03.600049 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb" Dec 06 03:45:04 crc kubenswrapper[4801]: I1206 03:45:04.029649 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28"] Dec 06 03:45:04 crc kubenswrapper[4801]: I1206 03:45:04.037653 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416500-jps28"] Dec 06 03:45:04 crc kubenswrapper[4801]: I1206 03:45:04.213142 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:45:04 crc kubenswrapper[4801]: E1206 03:45:04.213362 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:45:05 crc kubenswrapper[4801]: I1206 03:45:05.222677 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="630057a4-ba0a-485b-8ac1-0113c42a9fe5" path="/var/lib/kubelet/pods/630057a4-ba0a-485b-8ac1-0113c42a9fe5/volumes" Dec 06 03:45:19 crc kubenswrapper[4801]: I1206 03:45:19.212548 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:45:19 crc kubenswrapper[4801]: E1206 03:45:19.213345 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:45:20 crc kubenswrapper[4801]: I1206 03:45:20.499504 4801 scope.go:117] "RemoveContainer" containerID="57258c0fda32402a7ebb53442871e84392e38f23597f494fa267739d36a616b9" Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.949730 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.959161 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.967999 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-6lkvb"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.975654 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.983084 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.989455 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr"] Dec 06 03:45:27 crc kubenswrapper[4801]: I1206 03:45:27.996776 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6qpjz"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.004080 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-kwq8r"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.011852 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.019121 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4552d"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.025592 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.031565 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.038644 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.045801 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-99vwg"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.052825 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8j2sr"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.059694 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-w796s"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.065737 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-xbxzs"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.071902 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4552d"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.077709 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fj2q8"] Dec 06 03:45:28 crc kubenswrapper[4801]: I1206 03:45:28.083453 4801 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-vd2f6"] Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.225860 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0813e3d8-6857-4d8d-83ef-b43c1ff774e1" path="/var/lib/kubelet/pods/0813e3d8-6857-4d8d-83ef-b43c1ff774e1/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.226403 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f8c82ec-4188-4cfb-8179-29d123ef6d8d" path="/var/lib/kubelet/pods/4f8c82ec-4188-4cfb-8179-29d123ef6d8d/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.226913 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56777df7-ed53-4b2c-af02-b24ce707927e" path="/var/lib/kubelet/pods/56777df7-ed53-4b2c-af02-b24ce707927e/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.227537 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94086867-d5b4-4c97-9f39-2df6a18bd4b7" path="/var/lib/kubelet/pods/94086867-d5b4-4c97-9f39-2df6a18bd4b7/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.228618 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bff6d99-5c8c-421c-9f06-e24e58c59492" path="/var/lib/kubelet/pods/9bff6d99-5c8c-421c-9f06-e24e58c59492/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.229324 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d90de88-52ef-4cbf-a0e3-1b31a853cbf6" path="/var/lib/kubelet/pods/9d90de88-52ef-4cbf-a0e3-1b31a853cbf6/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.230017 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cd69b3-737d-4293-bbe0-426a284b5c3b" path="/var/lib/kubelet/pods/c4cd69b3-737d-4293-bbe0-426a284b5c3b/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.231108 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="caca002a-afc5-45e5-9400-3f8ba6b0978a" path="/var/lib/kubelet/pods/caca002a-afc5-45e5-9400-3f8ba6b0978a/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.231651 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d80971b5-fe84-4ad9-a5db-75a00e17f031" path="/var/lib/kubelet/pods/d80971b5-fe84-4ad9-a5db-75a00e17f031/volumes" Dec 06 03:45:29 crc kubenswrapper[4801]: I1206 03:45:29.232196 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9cbfc7a-8123-4bf7-bebc-fb50674cf566" path="/var/lib/kubelet/pods/f9cbfc7a-8123-4bf7-bebc-fb50674cf566/volumes" Dec 06 03:45:33 crc kubenswrapper[4801]: I1206 03:45:33.213176 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:45:33 crc kubenswrapper[4801]: E1206 03:45:33.214393 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.037094 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66"] Dec 06 03:45:34 crc kubenswrapper[4801]: E1206 03:45:34.040615 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c138298f-7ed9-4198-bb9c-f3e37aac4834" containerName="collect-profiles" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.040653 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c138298f-7ed9-4198-bb9c-f3e37aac4834" containerName="collect-profiles" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.040967 4801 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="c138298f-7ed9-4198-bb9c-f3e37aac4834" containerName="collect-profiles" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.042358 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.049864 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.049918 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66"] Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.050119 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.050145 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.050378 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.050550 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.161285 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.161327 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.161526 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.161572 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4kx\" (UniqueName: \"kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.161622 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.262849 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4kx\" (UniqueName: \"kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: 
\"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.263422 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.263558 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.263664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.264467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.270143 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.270165 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.272381 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.272477 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.279040 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4kx\" (UniqueName: \"kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc 
kubenswrapper[4801]: I1206 03:45:34.367187 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.854284 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66"] Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.859828 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:45:34 crc kubenswrapper[4801]: I1206 03:45:34.874460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" event={"ID":"17c94e6a-0e75-45a9-a11d-31eae796de72","Type":"ContainerStarted","Data":"8ae8b68e9a77a76394e0f8d7674a02539b30c2cdae564c9d771c51867c913314"} Dec 06 03:45:36 crc kubenswrapper[4801]: I1206 03:45:36.889808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" event={"ID":"17c94e6a-0e75-45a9-a11d-31eae796de72","Type":"ContainerStarted","Data":"92c1d15228415a9c7d5c12d6365192365eea0769566f24fdb4c01b4019916427"} Dec 06 03:45:36 crc kubenswrapper[4801]: I1206 03:45:36.908638 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" podStartSLOduration=2.435566638 podStartE2EDuration="2.908619405s" podCreationTimestamp="2025-12-06 03:45:34 +0000 UTC" firstStartedPulling="2025-12-06 03:45:34.85958638 +0000 UTC m=+2387.982193952" lastFinishedPulling="2025-12-06 03:45:35.332639147 +0000 UTC m=+2388.455246719" observedRunningTime="2025-12-06 03:45:36.907836144 +0000 UTC m=+2390.030443716" watchObservedRunningTime="2025-12-06 03:45:36.908619405 +0000 UTC m=+2390.031226977" Dec 06 03:45:44 crc kubenswrapper[4801]: I1206 03:45:44.212301 4801 scope.go:117] "RemoveContainer" 
containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:45:44 crc kubenswrapper[4801]: E1206 03:45:44.213155 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:45:48 crc kubenswrapper[4801]: I1206 03:45:48.979348 4801 generic.go:334] "Generic (PLEG): container finished" podID="17c94e6a-0e75-45a9-a11d-31eae796de72" containerID="92c1d15228415a9c7d5c12d6365192365eea0769566f24fdb4c01b4019916427" exitCode=0 Dec 06 03:45:48 crc kubenswrapper[4801]: I1206 03:45:48.979464 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" event={"ID":"17c94e6a-0e75-45a9-a11d-31eae796de72","Type":"ContainerDied","Data":"92c1d15228415a9c7d5c12d6365192365eea0769566f24fdb4c01b4019916427"} Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.369270 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.446664 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph\") pod \"17c94e6a-0e75-45a9-a11d-31eae796de72\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.446742 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory\") pod \"17c94e6a-0e75-45a9-a11d-31eae796de72\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.446788 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key\") pod \"17c94e6a-0e75-45a9-a11d-31eae796de72\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.446860 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle\") pod \"17c94e6a-0e75-45a9-a11d-31eae796de72\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.446978 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c4kx\" (UniqueName: \"kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx\") pod \"17c94e6a-0e75-45a9-a11d-31eae796de72\" (UID: \"17c94e6a-0e75-45a9-a11d-31eae796de72\") " Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.452664 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph" (OuterVolumeSpecName: "ceph") pod "17c94e6a-0e75-45a9-a11d-31eae796de72" (UID: "17c94e6a-0e75-45a9-a11d-31eae796de72"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.452700 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "17c94e6a-0e75-45a9-a11d-31eae796de72" (UID: "17c94e6a-0e75-45a9-a11d-31eae796de72"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.453343 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx" (OuterVolumeSpecName: "kube-api-access-2c4kx") pod "17c94e6a-0e75-45a9-a11d-31eae796de72" (UID: "17c94e6a-0e75-45a9-a11d-31eae796de72"). InnerVolumeSpecName "kube-api-access-2c4kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.473304 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17c94e6a-0e75-45a9-a11d-31eae796de72" (UID: "17c94e6a-0e75-45a9-a11d-31eae796de72"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.474017 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory" (OuterVolumeSpecName: "inventory") pod "17c94e6a-0e75-45a9-a11d-31eae796de72" (UID: "17c94e6a-0e75-45a9-a11d-31eae796de72"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.549238 4801 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.549275 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c4kx\" (UniqueName: \"kubernetes.io/projected/17c94e6a-0e75-45a9-a11d-31eae796de72-kube-api-access-2c4kx\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.549285 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.549296 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.549304 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17c94e6a-0e75-45a9-a11d-31eae796de72-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.996341 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" event={"ID":"17c94e6a-0e75-45a9-a11d-31eae796de72","Type":"ContainerDied","Data":"8ae8b68e9a77a76394e0f8d7674a02539b30c2cdae564c9d771c51867c913314"} Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.996412 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae8b68e9a77a76394e0f8d7674a02539b30c2cdae564c9d771c51867c913314" Dec 06 03:45:50 crc kubenswrapper[4801]: I1206 03:45:50.996355 
4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.080426 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9"] Dec 06 03:45:51 crc kubenswrapper[4801]: E1206 03:45:51.080876 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17c94e6a-0e75-45a9-a11d-31eae796de72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.080895 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="17c94e6a-0e75-45a9-a11d-31eae796de72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.081106 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="17c94e6a-0e75-45a9-a11d-31eae796de72" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.081770 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.083670 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.084205 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.084399 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.084636 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.084953 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.099132 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9"] Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.167460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.167548 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.167680 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.167738 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.167783 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7k8r\" (UniqueName: \"kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.269616 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.269840 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.269903 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.269937 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7k8r\" (UniqueName: \"kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.269981 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.275227 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.275610 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.275715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.284405 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.288162 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7k8r\" (UniqueName: \"kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.400935 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:45:51 crc kubenswrapper[4801]: I1206 03:45:51.979869 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9"] Dec 06 03:45:52 crc kubenswrapper[4801]: I1206 03:45:52.005503 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" event={"ID":"55bc4fed-30e6-430f-a4d8-6be830c1f268","Type":"ContainerStarted","Data":"d70a817cb5e41f63ede07d3e40bb3bfaa8dc1976edce069adcfd6c34e225ad37"} Dec 06 03:45:53 crc kubenswrapper[4801]: I1206 03:45:53.013460 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" event={"ID":"55bc4fed-30e6-430f-a4d8-6be830c1f268","Type":"ContainerStarted","Data":"791110ce3b3643be9778a39e7bd17de475c5d8da70c036f1a58cb8446a9d428d"} Dec 06 03:45:53 crc kubenswrapper[4801]: I1206 03:45:53.037101 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" podStartSLOduration=1.369830668 podStartE2EDuration="2.037077567s" podCreationTimestamp="2025-12-06 03:45:51 +0000 UTC" firstStartedPulling="2025-12-06 03:45:51.973399456 +0000 UTC m=+2405.096007028" lastFinishedPulling="2025-12-06 03:45:52.640646355 +0000 UTC m=+2405.763253927" observedRunningTime="2025-12-06 03:45:53.029472103 +0000 UTC m=+2406.152079675" watchObservedRunningTime="2025-12-06 03:45:53.037077567 +0000 UTC m=+2406.159685139" Dec 06 03:45:55 crc kubenswrapper[4801]: I1206 03:45:55.212950 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:45:55 crc kubenswrapper[4801]: E1206 03:45:55.214389 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:46:07 crc kubenswrapper[4801]: I1206 03:46:07.217153 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:46:07 crc kubenswrapper[4801]: E1206 03:46:07.218122 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:46:19 crc kubenswrapper[4801]: I1206 03:46:19.211794 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:46:19 crc kubenswrapper[4801]: E1206 03:46:19.212539 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.555682 4801 scope.go:117] "RemoveContainer" containerID="b7135eb9bc3c21279a8b453c072fe93d13e0aa3ea7d6718699406d71a97fca79" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.586118 4801 scope.go:117] "RemoveContainer" containerID="d55220de087ed10413e0eae681cc376041d70b8b7ed3000153c1e4ba2b927ed0" Dec 06 
03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.647856 4801 scope.go:117] "RemoveContainer" containerID="c93c85120d18997bdcbaa8ba9d621a6c18adc1af51101d292fbf2f2641dcdc66" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.702490 4801 scope.go:117] "RemoveContainer" containerID="e592898de4c814a905126127c7c946e0ee478b696ac268e2e3a07d184ec54ba8" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.747625 4801 scope.go:117] "RemoveContainer" containerID="8ecec5e2acc1c36e977ecb87ddbf62cf6b2efc24c1854efaec2b5f128882562f" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.772967 4801 scope.go:117] "RemoveContainer" containerID="fa3f2cc10a3718a2070c60bb1f86031dab77f0087958bf7f8603ba75e0deabc4" Dec 06 03:46:20 crc kubenswrapper[4801]: I1206 03:46:20.856743 4801 scope.go:117] "RemoveContainer" containerID="c97bc66664cc4b0c33084dfc8c91068c2a486663a133764597c58b04c80d4b75" Dec 06 03:46:34 crc kubenswrapper[4801]: I1206 03:46:34.213821 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:46:34 crc kubenswrapper[4801]: E1206 03:46:34.215121 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:46:49 crc kubenswrapper[4801]: I1206 03:46:49.212146 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:46:49 crc kubenswrapper[4801]: E1206 03:46:49.212917 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:47:03 crc kubenswrapper[4801]: I1206 03:47:03.213734 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:47:03 crc kubenswrapper[4801]: E1206 03:47:03.214410 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:47:18 crc kubenswrapper[4801]: I1206 03:47:18.212824 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:47:18 crc kubenswrapper[4801]: E1206 03:47:18.213576 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:47:21 crc kubenswrapper[4801]: I1206 03:47:21.022096 4801 scope.go:117] "RemoveContainer" containerID="2c13e5134dec0e2a0f60a3aec3d87b345c6fbea18bdb5c564cfcbb5ab1f4eaa8" Dec 06 03:47:21 crc kubenswrapper[4801]: I1206 03:47:21.066475 4801 scope.go:117] "RemoveContainer" containerID="0c4322b100352de72231146bc7d93ca7a78b5b96105083070c7884ce768fddb4" Dec 06 03:47:21 crc kubenswrapper[4801]: 
I1206 03:47:21.112694 4801 scope.go:117] "RemoveContainer" containerID="9fd057034ed0a44b614e19d79bf52cd42ea98b353360c05f33c350d4320d9910" Dec 06 03:47:31 crc kubenswrapper[4801]: I1206 03:47:31.212234 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:47:31 crc kubenswrapper[4801]: E1206 03:47:31.212862 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:47:31 crc kubenswrapper[4801]: I1206 03:47:31.869183 4801 generic.go:334] "Generic (PLEG): container finished" podID="55bc4fed-30e6-430f-a4d8-6be830c1f268" containerID="791110ce3b3643be9778a39e7bd17de475c5d8da70c036f1a58cb8446a9d428d" exitCode=0 Dec 06 03:47:31 crc kubenswrapper[4801]: I1206 03:47:31.869282 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" event={"ID":"55bc4fed-30e6-430f-a4d8-6be830c1f268","Type":"ContainerDied","Data":"791110ce3b3643be9778a39e7bd17de475c5d8da70c036f1a58cb8446a9d428d"} Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.328807 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.410254 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.410446 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.410548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.410640 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7k8r\" (UniqueName: \"kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.410693 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.417701 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph" (OuterVolumeSpecName: "ceph") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.419454 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.420244 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r" (OuterVolumeSpecName: "kube-api-access-d7k8r") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268"). InnerVolumeSpecName "kube-api-access-d7k8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:47:33 crc kubenswrapper[4801]: E1206 03:47:33.441549 4801 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory podName:55bc4fed-30e6-430f-a4d8-6be830c1f268 nodeName:}" failed. No retries permitted until 2025-12-06 03:47:33.941520644 +0000 UTC m=+2507.064128216 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "inventory" (UniqueName: "kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268") : error deleting /var/lib/kubelet/pods/55bc4fed-30e6-430f-a4d8-6be830c1f268/volume-subpaths: remove /var/lib/kubelet/pods/55bc4fed-30e6-430f-a4d8-6be830c1f268/volume-subpaths: no such file or directory Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.445206 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.513907 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.513943 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7k8r\" (UniqueName: \"kubernetes.io/projected/55bc4fed-30e6-430f-a4d8-6be830c1f268-kube-api-access-d7k8r\") on node \"crc\" DevicePath \"\"" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.513953 4801 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.513962 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.893531 4801 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" event={"ID":"55bc4fed-30e6-430f-a4d8-6be830c1f268","Type":"ContainerDied","Data":"d70a817cb5e41f63ede07d3e40bb3bfaa8dc1976edce069adcfd6c34e225ad37"} Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.894012 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d70a817cb5e41f63ede07d3e40bb3bfaa8dc1976edce069adcfd6c34e225ad37" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.893912 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.996360 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2"] Dec 06 03:47:33 crc kubenswrapper[4801]: E1206 03:47:33.996836 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55bc4fed-30e6-430f-a4d8-6be830c1f268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.996857 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="55bc4fed-30e6-430f-a4d8-6be830c1f268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.997008 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="55bc4fed-30e6-430f-a4d8-6be830c1f268" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 06 03:47:33 crc kubenswrapper[4801]: I1206 03:47:33.997617 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.005945 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2"] Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.024116 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") pod \"55bc4fed-30e6-430f-a4d8-6be830c1f268\" (UID: \"55bc4fed-30e6-430f-a4d8-6be830c1f268\") " Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.028613 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory" (OuterVolumeSpecName: "inventory") pod "55bc4fed-30e6-430f-a4d8-6be830c1f268" (UID: "55bc4fed-30e6-430f-a4d8-6be830c1f268"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.125828 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.125986 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.126051 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.126240 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l9q9\" (UniqueName: \"kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.126340 4801 reconciler_common.go:293] "Volume 
detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55bc4fed-30e6-430f-a4d8-6be830c1f268-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.227168 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l9q9\" (UniqueName: \"kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.227250 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.227289 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.227344 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.231130 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.231594 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.241034 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.243745 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l9q9\" (UniqueName: \"kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.325033 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.819433 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2"] Dec 06 03:47:34 crc kubenswrapper[4801]: I1206 03:47:34.902857 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" event={"ID":"6d1eac30-6555-4e4f-a285-0f988967b438","Type":"ContainerStarted","Data":"01bc138aa41224ecaee3ec64050eacbe1cdc72301d7700df1d6fd95c4966042e"} Dec 06 03:47:35 crc kubenswrapper[4801]: I1206 03:47:35.912983 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" event={"ID":"6d1eac30-6555-4e4f-a285-0f988967b438","Type":"ContainerStarted","Data":"865440a755a327e116d2f91f4d08ea87a88eab73185ec9013c3ac469d3f46a4d"} Dec 06 03:47:35 crc kubenswrapper[4801]: I1206 03:47:35.934678 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" podStartSLOduration=2.144103021 podStartE2EDuration="2.934660464s" podCreationTimestamp="2025-12-06 03:47:33 +0000 UTC" firstStartedPulling="2025-12-06 03:47:34.827238363 +0000 UTC m=+2507.949845925" lastFinishedPulling="2025-12-06 03:47:35.617795796 +0000 UTC m=+2508.740403368" observedRunningTime="2025-12-06 03:47:35.929976708 +0000 UTC m=+2509.052584280" watchObservedRunningTime="2025-12-06 03:47:35.934660464 +0000 UTC m=+2509.057268036" Dec 06 03:47:42 crc kubenswrapper[4801]: I1206 03:47:42.212754 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:47:42 crc kubenswrapper[4801]: E1206 03:47:42.227001 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:47:55 crc kubenswrapper[4801]: I1206 03:47:55.212467 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:47:55 crc kubenswrapper[4801]: E1206 03:47:55.213240 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:02 crc kubenswrapper[4801]: I1206 03:48:02.110244 4801 generic.go:334] "Generic (PLEG): container finished" podID="6d1eac30-6555-4e4f-a285-0f988967b438" containerID="865440a755a327e116d2f91f4d08ea87a88eab73185ec9013c3ac469d3f46a4d" exitCode=0 Dec 06 03:48:02 crc kubenswrapper[4801]: I1206 03:48:02.110331 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" event={"ID":"6d1eac30-6555-4e4f-a285-0f988967b438","Type":"ContainerDied","Data":"865440a755a327e116d2f91f4d08ea87a88eab73185ec9013c3ac469d3f46a4d"} Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.513810 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.536155 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l9q9\" (UniqueName: \"kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9\") pod \"6d1eac30-6555-4e4f-a285-0f988967b438\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.536295 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory\") pod \"6d1eac30-6555-4e4f-a285-0f988967b438\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.536371 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key\") pod \"6d1eac30-6555-4e4f-a285-0f988967b438\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.536402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph\") pod \"6d1eac30-6555-4e4f-a285-0f988967b438\" (UID: \"6d1eac30-6555-4e4f-a285-0f988967b438\") " Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.545180 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph" (OuterVolumeSpecName: "ceph") pod "6d1eac30-6555-4e4f-a285-0f988967b438" (UID: "6d1eac30-6555-4e4f-a285-0f988967b438"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.545389 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9" (OuterVolumeSpecName: "kube-api-access-9l9q9") pod "6d1eac30-6555-4e4f-a285-0f988967b438" (UID: "6d1eac30-6555-4e4f-a285-0f988967b438"). InnerVolumeSpecName "kube-api-access-9l9q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.567988 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory" (OuterVolumeSpecName: "inventory") pod "6d1eac30-6555-4e4f-a285-0f988967b438" (UID: "6d1eac30-6555-4e4f-a285-0f988967b438"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.574599 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d1eac30-6555-4e4f-a285-0f988967b438" (UID: "6d1eac30-6555-4e4f-a285-0f988967b438"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.638404 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.638439 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.638448 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6d1eac30-6555-4e4f-a285-0f988967b438-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:03 crc kubenswrapper[4801]: I1206 03:48:03.638458 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l9q9\" (UniqueName: \"kubernetes.io/projected/6d1eac30-6555-4e4f-a285-0f988967b438-kube-api-access-9l9q9\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.130637 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" event={"ID":"6d1eac30-6555-4e4f-a285-0f988967b438","Type":"ContainerDied","Data":"01bc138aa41224ecaee3ec64050eacbe1cdc72301d7700df1d6fd95c4966042e"} Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.130684 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01bc138aa41224ecaee3ec64050eacbe1cdc72301d7700df1d6fd95c4966042e" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.130684 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.207554 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq"] Dec 06 03:48:04 crc kubenswrapper[4801]: E1206 03:48:04.207910 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d1eac30-6555-4e4f-a285-0f988967b438" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.207928 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d1eac30-6555-4e4f-a285-0f988967b438" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.208119 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d1eac30-6555-4e4f-a285-0f988967b438" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.208654 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.213665 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.213790 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.213937 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.213977 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.213948 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.242983 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq"] Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.250707 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.250858 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.250915 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.251004 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwj9g\" (UniqueName: \"kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.352883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwj9g\" (UniqueName: \"kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.352948 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.353022 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.353061 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.358079 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.368515 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.370619 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwj9g\" (UniqueName: \"kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.378683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-j29xq\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.531192 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:04 crc kubenswrapper[4801]: I1206 03:48:04.993542 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq"] Dec 06 03:48:05 crc kubenswrapper[4801]: I1206 03:48:05.140456 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" event={"ID":"0be3f374-f93f-4533-aa01-b56ae87544a9","Type":"ContainerStarted","Data":"0586bb66ef94a8cda8e7f9cb95acd007d5d147edaed9449a274434ee3f2e197a"} Dec 06 03:48:06 crc kubenswrapper[4801]: I1206 03:48:06.148564 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" event={"ID":"0be3f374-f93f-4533-aa01-b56ae87544a9","Type":"ContainerStarted","Data":"74165dc4284095c394d69c68cdf896f00e25fa1002c836e5ec2d90ffa7530060"} Dec 06 03:48:06 crc kubenswrapper[4801]: I1206 03:48:06.163552 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" podStartSLOduration=1.697074356 podStartE2EDuration="2.163535035s" podCreationTimestamp="2025-12-06 03:48:04 +0000 UTC" firstStartedPulling="2025-12-06 03:48:04.99930794 +0000 UTC 
m=+2538.121915512" lastFinishedPulling="2025-12-06 03:48:05.465768599 +0000 UTC m=+2538.588376191" observedRunningTime="2025-12-06 03:48:06.160867363 +0000 UTC m=+2539.283474955" watchObservedRunningTime="2025-12-06 03:48:06.163535035 +0000 UTC m=+2539.286142607" Dec 06 03:48:06 crc kubenswrapper[4801]: I1206 03:48:06.214732 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:48:06 crc kubenswrapper[4801]: E1206 03:48:06.215581 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:11 crc kubenswrapper[4801]: I1206 03:48:11.186186 4801 generic.go:334] "Generic (PLEG): container finished" podID="0be3f374-f93f-4533-aa01-b56ae87544a9" containerID="74165dc4284095c394d69c68cdf896f00e25fa1002c836e5ec2d90ffa7530060" exitCode=0 Dec 06 03:48:11 crc kubenswrapper[4801]: I1206 03:48:11.186268 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" event={"ID":"0be3f374-f93f-4533-aa01-b56ae87544a9","Type":"ContainerDied","Data":"74165dc4284095c394d69c68cdf896f00e25fa1002c836e5ec2d90ffa7530060"} Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.616437 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.702495 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwj9g\" (UniqueName: \"kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g\") pod \"0be3f374-f93f-4533-aa01-b56ae87544a9\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.702551 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key\") pod \"0be3f374-f93f-4533-aa01-b56ae87544a9\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.702578 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory\") pod \"0be3f374-f93f-4533-aa01-b56ae87544a9\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.702670 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph\") pod \"0be3f374-f93f-4533-aa01-b56ae87544a9\" (UID: \"0be3f374-f93f-4533-aa01-b56ae87544a9\") " Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.708173 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph" (OuterVolumeSpecName: "ceph") pod "0be3f374-f93f-4533-aa01-b56ae87544a9" (UID: "0be3f374-f93f-4533-aa01-b56ae87544a9"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.708278 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g" (OuterVolumeSpecName: "kube-api-access-cwj9g") pod "0be3f374-f93f-4533-aa01-b56ae87544a9" (UID: "0be3f374-f93f-4533-aa01-b56ae87544a9"). InnerVolumeSpecName "kube-api-access-cwj9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.728247 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory" (OuterVolumeSpecName: "inventory") pod "0be3f374-f93f-4533-aa01-b56ae87544a9" (UID: "0be3f374-f93f-4533-aa01-b56ae87544a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.730001 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0be3f374-f93f-4533-aa01-b56ae87544a9" (UID: "0be3f374-f93f-4533-aa01-b56ae87544a9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.803644 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwj9g\" (UniqueName: \"kubernetes.io/projected/0be3f374-f93f-4533-aa01-b56ae87544a9-kube-api-access-cwj9g\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.803868 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.803935 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:12 crc kubenswrapper[4801]: I1206 03:48:12.803988 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0be3f374-f93f-4533-aa01-b56ae87544a9-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.204012 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" event={"ID":"0be3f374-f93f-4533-aa01-b56ae87544a9","Type":"ContainerDied","Data":"0586bb66ef94a8cda8e7f9cb95acd007d5d147edaed9449a274434ee3f2e197a"} Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.204050 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0586bb66ef94a8cda8e7f9cb95acd007d5d147edaed9449a274434ee3f2e197a" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.204060 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-j29xq" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.294071 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss"] Dec 06 03:48:13 crc kubenswrapper[4801]: E1206 03:48:13.294739 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be3f374-f93f-4533-aa01-b56ae87544a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.294765 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be3f374-f93f-4533-aa01-b56ae87544a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.294970 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be3f374-f93f-4533-aa01-b56ae87544a9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.295555 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.298517 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.298645 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.298716 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.298655 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.301742 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.303407 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss"] Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.417156 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.417252 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xvm\" (UniqueName: \"kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.417308 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.417345 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.519036 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.519158 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xvm\" (UniqueName: \"kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.519220 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.519258 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.523517 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.523548 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.529216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.538074 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xvm\" (UniqueName: \"kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-zltss\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:13 crc kubenswrapper[4801]: I1206 03:48:13.623487 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:48:14 crc kubenswrapper[4801]: I1206 03:48:14.152933 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss"] Dec 06 03:48:14 crc kubenswrapper[4801]: I1206 03:48:14.212962 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" event={"ID":"1fb7037e-6eca-42a5-b146-02594414a08b","Type":"ContainerStarted","Data":"5071dc4e6d521743a9681b25365edec0c9810384c8ece7b3d5dbfb9aafe35ad7"} Dec 06 03:48:17 crc kubenswrapper[4801]: I1206 03:48:17.237111 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" event={"ID":"1fb7037e-6eca-42a5-b146-02594414a08b","Type":"ContainerStarted","Data":"1a862fde6e4f3cfa9f266c11bcc7654a78e49818e700a261bbac8e1743238ab5"} Dec 06 03:48:17 crc kubenswrapper[4801]: I1206 03:48:17.259242 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" podStartSLOduration=3.847346371 podStartE2EDuration="4.259219374s" podCreationTimestamp="2025-12-06 03:48:13 +0000 UTC" firstStartedPulling="2025-12-06 03:48:14.159526629 +0000 UTC m=+2547.282134201" lastFinishedPulling="2025-12-06 03:48:14.571399642 +0000 UTC m=+2547.694007204" observedRunningTime="2025-12-06 03:48:17.254292562 +0000 UTC 
m=+2550.376900144" watchObservedRunningTime="2025-12-06 03:48:17.259219374 +0000 UTC m=+2550.381826966" Dec 06 03:48:21 crc kubenswrapper[4801]: I1206 03:48:21.211859 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:48:21 crc kubenswrapper[4801]: E1206 03:48:21.212338 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:33 crc kubenswrapper[4801]: I1206 03:48:33.213016 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:48:33 crc kubenswrapper[4801]: E1206 03:48:33.213703 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:45 crc kubenswrapper[4801]: I1206 03:48:45.213061 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:48:45 crc kubenswrapper[4801]: E1206 03:48:45.213828 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:57 crc kubenswrapper[4801]: I1206 03:48:57.217807 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:48:57 crc kubenswrapper[4801]: E1206 03:48:57.218577 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:48:58 crc kubenswrapper[4801]: I1206 03:48:58.580820 4801 generic.go:334] "Generic (PLEG): container finished" podID="1fb7037e-6eca-42a5-b146-02594414a08b" containerID="1a862fde6e4f3cfa9f266c11bcc7654a78e49818e700a261bbac8e1743238ab5" exitCode=0 Dec 06 03:48:58 crc kubenswrapper[4801]: I1206 03:48:58.580923 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" event={"ID":"1fb7037e-6eca-42a5-b146-02594414a08b","Type":"ContainerDied","Data":"1a862fde6e4f3cfa9f266c11bcc7654a78e49818e700a261bbac8e1743238ab5"} Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.020391 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.131362 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key\") pod \"1fb7037e-6eca-42a5-b146-02594414a08b\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.131412 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48xvm\" (UniqueName: \"kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm\") pod \"1fb7037e-6eca-42a5-b146-02594414a08b\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.131641 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory\") pod \"1fb7037e-6eca-42a5-b146-02594414a08b\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.131774 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph\") pod \"1fb7037e-6eca-42a5-b146-02594414a08b\" (UID: \"1fb7037e-6eca-42a5-b146-02594414a08b\") " Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.141795 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph" (OuterVolumeSpecName: "ceph") pod "1fb7037e-6eca-42a5-b146-02594414a08b" (UID: "1fb7037e-6eca-42a5-b146-02594414a08b"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.141848 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm" (OuterVolumeSpecName: "kube-api-access-48xvm") pod "1fb7037e-6eca-42a5-b146-02594414a08b" (UID: "1fb7037e-6eca-42a5-b146-02594414a08b"). InnerVolumeSpecName "kube-api-access-48xvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.157685 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1fb7037e-6eca-42a5-b146-02594414a08b" (UID: "1fb7037e-6eca-42a5-b146-02594414a08b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.158390 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory" (OuterVolumeSpecName: "inventory") pod "1fb7037e-6eca-42a5-b146-02594414a08b" (UID: "1fb7037e-6eca-42a5-b146-02594414a08b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.233818 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.233849 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.233862 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48xvm\" (UniqueName: \"kubernetes.io/projected/1fb7037e-6eca-42a5-b146-02594414a08b-kube-api-access-48xvm\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.233872 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fb7037e-6eca-42a5-b146-02594414a08b-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.595136 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" event={"ID":"1fb7037e-6eca-42a5-b146-02594414a08b","Type":"ContainerDied","Data":"5071dc4e6d521743a9681b25365edec0c9810384c8ece7b3d5dbfb9aafe35ad7"} Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.595175 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5071dc4e6d521743a9681b25365edec0c9810384c8ece7b3d5dbfb9aafe35ad7" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.595188 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-zltss" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.673692 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv"] Dec 06 03:49:00 crc kubenswrapper[4801]: E1206 03:49:00.674122 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb7037e-6eca-42a5-b146-02594414a08b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.674142 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb7037e-6eca-42a5-b146-02594414a08b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.674330 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb7037e-6eca-42a5-b146-02594414a08b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.675070 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.677260 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.677514 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.678354 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.678525 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.681425 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.688861 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv"] Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.743185 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.743649 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcd2h\" (UniqueName: \"kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: 
\"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.743914 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.744120 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.845718 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.846059 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.846213 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vcd2h\" (UniqueName: \"kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.846321 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.851256 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.851308 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.851883 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 
03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.863715 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcd2h\" (UniqueName: \"kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:00 crc kubenswrapper[4801]: I1206 03:49:00.992909 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:01 crc kubenswrapper[4801]: I1206 03:49:01.481835 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv"] Dec 06 03:49:01 crc kubenswrapper[4801]: I1206 03:49:01.603319 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" event={"ID":"ace31379-943d-48d3-b156-c449eae9325c","Type":"ContainerStarted","Data":"18000d4b7940b376914b5bb175e61a346efc3e05986639f09e1a6c209503462c"} Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.146948 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.149026 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.156326 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.271505 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.271570 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pghj5\" (UniqueName: \"kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.271682 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.373002 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.373097 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.373134 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pghj5\" (UniqueName: \"kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.373792 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.373816 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.396359 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pghj5\" (UniqueName: \"kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5\") pod \"redhat-operators-lt5s8\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.524916 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.621321 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" event={"ID":"ace31379-943d-48d3-b156-c449eae9325c","Type":"ContainerStarted","Data":"66d584ccfdf8943774a8d1608c327ba998c91cc4dc319632c2383e117a7f98c1"} Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.650196 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" podStartSLOduration=2.100504461 podStartE2EDuration="2.650178503s" podCreationTimestamp="2025-12-06 03:49:00 +0000 UTC" firstStartedPulling="2025-12-06 03:49:01.497818248 +0000 UTC m=+2594.620425820" lastFinishedPulling="2025-12-06 03:49:02.04749229 +0000 UTC m=+2595.170099862" observedRunningTime="2025-12-06 03:49:02.644667765 +0000 UTC m=+2595.767275337" watchObservedRunningTime="2025-12-06 03:49:02.650178503 +0000 UTC m=+2595.772786075" Dec 06 03:49:02 crc kubenswrapper[4801]: I1206 03:49:02.985501 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:03 crc kubenswrapper[4801]: I1206 03:49:03.629372 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerStarted","Data":"d186c0576a3a66f0d5ed0b8cb652789fc9c5e2bbf9f6f01f90a0f9b6c05c7d8e"} Dec 06 03:49:04 crc kubenswrapper[4801]: I1206 03:49:04.636972 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerStarted","Data":"8e61f22b1dcd1cc2aab472b452236d51205640eef12a046e25b06aa61d28e0f8"} Dec 06 03:49:05 crc kubenswrapper[4801]: I1206 03:49:05.650237 4801 generic.go:334] "Generic (PLEG): container 
finished" podID="33117e65-6b16-4967-8031-847e1009dad9" containerID="8e61f22b1dcd1cc2aab472b452236d51205640eef12a046e25b06aa61d28e0f8" exitCode=0 Dec 06 03:49:05 crc kubenswrapper[4801]: I1206 03:49:05.650588 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerDied","Data":"8e61f22b1dcd1cc2aab472b452236d51205640eef12a046e25b06aa61d28e0f8"} Dec 06 03:49:07 crc kubenswrapper[4801]: I1206 03:49:07.694721 4801 generic.go:334] "Generic (PLEG): container finished" podID="ace31379-943d-48d3-b156-c449eae9325c" containerID="66d584ccfdf8943774a8d1608c327ba998c91cc4dc319632c2383e117a7f98c1" exitCode=0 Dec 06 03:49:07 crc kubenswrapper[4801]: I1206 03:49:07.694794 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" event={"ID":"ace31379-943d-48d3-b156-c449eae9325c","Type":"ContainerDied","Data":"66d584ccfdf8943774a8d1608c327ba998c91cc4dc319632c2383e117a7f98c1"} Dec 06 03:49:07 crc kubenswrapper[4801]: I1206 03:49:07.697405 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerStarted","Data":"7761b5057660af6dff4726ad474f7ee303762ebd19cbfa6bbaa406f835e86074"} Dec 06 03:49:08 crc kubenswrapper[4801]: I1206 03:49:08.710295 4801 generic.go:334] "Generic (PLEG): container finished" podID="33117e65-6b16-4967-8031-847e1009dad9" containerID="7761b5057660af6dff4726ad474f7ee303762ebd19cbfa6bbaa406f835e86074" exitCode=0 Dec 06 03:49:08 crc kubenswrapper[4801]: I1206 03:49:08.710361 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerDied","Data":"7761b5057660af6dff4726ad474f7ee303762ebd19cbfa6bbaa406f835e86074"} Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 
03:49:09.258351 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.344892 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcd2h\" (UniqueName: \"kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h\") pod \"ace31379-943d-48d3-b156-c449eae9325c\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.344971 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph\") pod \"ace31379-943d-48d3-b156-c449eae9325c\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.345107 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory\") pod \"ace31379-943d-48d3-b156-c449eae9325c\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.345214 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key\") pod \"ace31379-943d-48d3-b156-c449eae9325c\" (UID: \"ace31379-943d-48d3-b156-c449eae9325c\") " Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.351185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h" (OuterVolumeSpecName: "kube-api-access-vcd2h") pod "ace31379-943d-48d3-b156-c449eae9325c" (UID: "ace31379-943d-48d3-b156-c449eae9325c"). InnerVolumeSpecName "kube-api-access-vcd2h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.367088 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph" (OuterVolumeSpecName: "ceph") pod "ace31379-943d-48d3-b156-c449eae9325c" (UID: "ace31379-943d-48d3-b156-c449eae9325c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.372730 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ace31379-943d-48d3-b156-c449eae9325c" (UID: "ace31379-943d-48d3-b156-c449eae9325c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.373979 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory" (OuterVolumeSpecName: "inventory") pod "ace31379-943d-48d3-b156-c449eae9325c" (UID: "ace31379-943d-48d3-b156-c449eae9325c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.446980 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.447007 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.447016 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcd2h\" (UniqueName: \"kubernetes.io/projected/ace31379-943d-48d3-b156-c449eae9325c-kube-api-access-vcd2h\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.447025 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ace31379-943d-48d3-b156-c449eae9325c-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.719645 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" event={"ID":"ace31379-943d-48d3-b156-c449eae9325c","Type":"ContainerDied","Data":"18000d4b7940b376914b5bb175e61a346efc3e05986639f09e1a6c209503462c"} Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.720788 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18000d4b7940b376914b5bb175e61a346efc3e05986639f09e1a6c209503462c" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.719693 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.825500 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst"] Dec 06 03:49:09 crc kubenswrapper[4801]: E1206 03:49:09.825907 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ace31379-943d-48d3-b156-c449eae9325c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.825924 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ace31379-943d-48d3-b156-c449eae9325c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.826102 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ace31379-943d-48d3-b156-c449eae9325c" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.826674 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.828671 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.828947 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.829160 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.829443 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.830483 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.837204 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst"] Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.854895 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.855082 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.855218 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.855302 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9zpd\" (UniqueName: \"kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.957169 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9zpd\" (UniqueName: \"kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.957237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.957267 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.957399 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.964418 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.965103 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: I1206 03:49:09.965121 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:09 crc kubenswrapper[4801]: 
I1206 03:49:09.977564 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9zpd\" (UniqueName: \"kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-29mst\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:10 crc kubenswrapper[4801]: I1206 03:49:10.148460 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:49:10 crc kubenswrapper[4801]: I1206 03:49:10.212327 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:49:10 crc kubenswrapper[4801]: E1206 03:49:10.212602 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:49:10 crc kubenswrapper[4801]: I1206 03:49:10.669220 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst"] Dec 06 03:49:10 crc kubenswrapper[4801]: W1206 03:49:10.670929 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9795699b_76ac_46ce_a6bf_0898ea8817f1.slice/crio-837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e WatchSource:0}: Error finding container 837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e: Status 404 returned error can't find the container with id 
837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e Dec 06 03:49:10 crc kubenswrapper[4801]: I1206 03:49:10.734227 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" event={"ID":"9795699b-76ac-46ce-a6bf-0898ea8817f1","Type":"ContainerStarted","Data":"837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e"} Dec 06 03:49:13 crc kubenswrapper[4801]: I1206 03:49:13.761023 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" event={"ID":"9795699b-76ac-46ce-a6bf-0898ea8817f1","Type":"ContainerStarted","Data":"4fbe7c125a7e8d9b2414600dbe2e3984b7afc1cc1cabfb0a6522dbfafdb83197"} Dec 06 03:49:13 crc kubenswrapper[4801]: I1206 03:49:13.765269 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerStarted","Data":"0893d54a3ee91655033220aa13dad6b57027c99d248de3dab9cf73b5346973eb"} Dec 06 03:49:13 crc kubenswrapper[4801]: I1206 03:49:13.790691 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" podStartSLOduration=1.995872603 podStartE2EDuration="4.790670206s" podCreationTimestamp="2025-12-06 03:49:09 +0000 UTC" firstStartedPulling="2025-12-06 03:49:10.67381296 +0000 UTC m=+2603.796420532" lastFinishedPulling="2025-12-06 03:49:13.468610563 +0000 UTC m=+2606.591218135" observedRunningTime="2025-12-06 03:49:13.788635872 +0000 UTC m=+2606.911243444" watchObservedRunningTime="2025-12-06 03:49:13.790670206 +0000 UTC m=+2606.913277788" Dec 06 03:49:13 crc kubenswrapper[4801]: I1206 03:49:13.823271 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lt5s8" podStartSLOduration=4.124401069 podStartE2EDuration="11.82324672s" 
podCreationTimestamp="2025-12-06 03:49:02 +0000 UTC" firstStartedPulling="2025-12-06 03:49:05.653159773 +0000 UTC m=+2598.775767345" lastFinishedPulling="2025-12-06 03:49:13.352005424 +0000 UTC m=+2606.474612996" observedRunningTime="2025-12-06 03:49:13.81617366 +0000 UTC m=+2606.938781252" watchObservedRunningTime="2025-12-06 03:49:13.82324672 +0000 UTC m=+2606.945854302" Dec 06 03:49:22 crc kubenswrapper[4801]: I1206 03:49:22.525153 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:22 crc kubenswrapper[4801]: I1206 03:49:22.536206 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:22 crc kubenswrapper[4801]: I1206 03:49:22.591405 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:22 crc kubenswrapper[4801]: I1206 03:49:22.885943 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:22 crc kubenswrapper[4801]: I1206 03:49:22.944667 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:24 crc kubenswrapper[4801]: I1206 03:49:24.212378 4801 scope.go:117] "RemoveContainer" containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:49:24 crc kubenswrapper[4801]: I1206 03:49:24.845122 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lt5s8" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="registry-server" containerID="cri-o://0893d54a3ee91655033220aa13dad6b57027c99d248de3dab9cf73b5346973eb" gracePeriod=2 Dec 06 03:49:26 crc kubenswrapper[4801]: I1206 03:49:26.863075 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42"} Dec 06 03:49:27 crc kubenswrapper[4801]: I1206 03:49:27.873712 4801 generic.go:334] "Generic (PLEG): container finished" podID="33117e65-6b16-4967-8031-847e1009dad9" containerID="0893d54a3ee91655033220aa13dad6b57027c99d248de3dab9cf73b5346973eb" exitCode=0 Dec 06 03:49:27 crc kubenswrapper[4801]: I1206 03:49:27.873814 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerDied","Data":"0893d54a3ee91655033220aa13dad6b57027c99d248de3dab9cf73b5346973eb"} Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.862786 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.903880 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lt5s8" event={"ID":"33117e65-6b16-4967-8031-847e1009dad9","Type":"ContainerDied","Data":"d186c0576a3a66f0d5ed0b8cb652789fc9c5e2bbf9f6f01f90a0f9b6c05c7d8e"} Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.903957 4801 scope.go:117] "RemoveContainer" containerID="0893d54a3ee91655033220aa13dad6b57027c99d248de3dab9cf73b5346973eb" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.904007 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lt5s8" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.905495 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content\") pod \"33117e65-6b16-4967-8031-847e1009dad9\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.905621 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities\") pod \"33117e65-6b16-4967-8031-847e1009dad9\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.905738 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pghj5\" (UniqueName: \"kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5\") pod \"33117e65-6b16-4967-8031-847e1009dad9\" (UID: \"33117e65-6b16-4967-8031-847e1009dad9\") " Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.907726 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities" (OuterVolumeSpecName: "utilities") pod "33117e65-6b16-4967-8031-847e1009dad9" (UID: "33117e65-6b16-4967-8031-847e1009dad9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.915954 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5" (OuterVolumeSpecName: "kube-api-access-pghj5") pod "33117e65-6b16-4967-8031-847e1009dad9" (UID: "33117e65-6b16-4967-8031-847e1009dad9"). InnerVolumeSpecName "kube-api-access-pghj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.975915 4801 scope.go:117] "RemoveContainer" containerID="7761b5057660af6dff4726ad474f7ee303762ebd19cbfa6bbaa406f835e86074" Dec 06 03:49:28 crc kubenswrapper[4801]: I1206 03:49:28.999306 4801 scope.go:117] "RemoveContainer" containerID="8e61f22b1dcd1cc2aab472b452236d51205640eef12a046e25b06aa61d28e0f8" Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.008318 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.008347 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pghj5\" (UniqueName: \"kubernetes.io/projected/33117e65-6b16-4967-8031-847e1009dad9-kube-api-access-pghj5\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.042369 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33117e65-6b16-4967-8031-847e1009dad9" (UID: "33117e65-6b16-4967-8031-847e1009dad9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.109909 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33117e65-6b16-4967-8031-847e1009dad9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.242431 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:29 crc kubenswrapper[4801]: I1206 03:49:29.249961 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lt5s8"] Dec 06 03:49:31 crc kubenswrapper[4801]: I1206 03:49:31.223521 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33117e65-6b16-4967-8031-847e1009dad9" path="/var/lib/kubelet/pods/33117e65-6b16-4967-8031-847e1009dad9/volumes" Dec 06 03:50:05 crc kubenswrapper[4801]: I1206 03:50:05.203383 4801 generic.go:334] "Generic (PLEG): container finished" podID="9795699b-76ac-46ce-a6bf-0898ea8817f1" containerID="4fbe7c125a7e8d9b2414600dbe2e3984b7afc1cc1cabfb0a6522dbfafdb83197" exitCode=0 Dec 06 03:50:05 crc kubenswrapper[4801]: I1206 03:50:05.203468 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" event={"ID":"9795699b-76ac-46ce-a6bf-0898ea8817f1","Type":"ContainerDied","Data":"4fbe7c125a7e8d9b2414600dbe2e3984b7afc1cc1cabfb0a6522dbfafdb83197"} Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.782185 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.895729 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key\") pod \"9795699b-76ac-46ce-a6bf-0898ea8817f1\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.896281 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9zpd\" (UniqueName: \"kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd\") pod \"9795699b-76ac-46ce-a6bf-0898ea8817f1\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.896340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph\") pod \"9795699b-76ac-46ce-a6bf-0898ea8817f1\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.896371 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory\") pod \"9795699b-76ac-46ce-a6bf-0898ea8817f1\" (UID: \"9795699b-76ac-46ce-a6bf-0898ea8817f1\") " Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.905345 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd" (OuterVolumeSpecName: "kube-api-access-r9zpd") pod "9795699b-76ac-46ce-a6bf-0898ea8817f1" (UID: "9795699b-76ac-46ce-a6bf-0898ea8817f1"). InnerVolumeSpecName "kube-api-access-r9zpd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.905592 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph" (OuterVolumeSpecName: "ceph") pod "9795699b-76ac-46ce-a6bf-0898ea8817f1" (UID: "9795699b-76ac-46ce-a6bf-0898ea8817f1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.926520 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9795699b-76ac-46ce-a6bf-0898ea8817f1" (UID: "9795699b-76ac-46ce-a6bf-0898ea8817f1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:06 crc kubenswrapper[4801]: I1206 03:50:06.938416 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory" (OuterVolumeSpecName: "inventory") pod "9795699b-76ac-46ce-a6bf-0898ea8817f1" (UID: "9795699b-76ac-46ce-a6bf-0898ea8817f1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.000327 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9zpd\" (UniqueName: \"kubernetes.io/projected/9795699b-76ac-46ce-a6bf-0898ea8817f1-kube-api-access-r9zpd\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.000387 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.000412 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.000431 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9795699b-76ac-46ce-a6bf-0898ea8817f1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.226471 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" event={"ID":"9795699b-76ac-46ce-a6bf-0898ea8817f1","Type":"ContainerDied","Data":"837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e"} Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.226514 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="837e6db706c21148bf84ff2f171aefae0a5afc829ffc9d2b82b9d41ab92a0e6e" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.226570 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-29mst" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.334743 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c9sqg"] Dec 06 03:50:07 crc kubenswrapper[4801]: E1206 03:50:07.335248 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="extract-content" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335269 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="extract-content" Dec 06 03:50:07 crc kubenswrapper[4801]: E1206 03:50:07.335296 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="registry-server" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335305 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="registry-server" Dec 06 03:50:07 crc kubenswrapper[4801]: E1206 03:50:07.335325 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="extract-utilities" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335334 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="extract-utilities" Dec 06 03:50:07 crc kubenswrapper[4801]: E1206 03:50:07.335352 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9795699b-76ac-46ce-a6bf-0898ea8817f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335361 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="9795699b-76ac-46ce-a6bf-0898ea8817f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335597 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="33117e65-6b16-4967-8031-847e1009dad9" containerName="registry-server" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.335632 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="9795699b-76ac-46ce-a6bf-0898ea8817f1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.336416 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.341440 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.341866 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.342001 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.342159 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.343298 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.346919 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c9sqg"] Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.409497 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gqgk\" (UniqueName: \"kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") 
" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.409574 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.409601 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.409746 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.511630 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gqgk\" (UniqueName: \"kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.511998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0\") pod 
\"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.512153 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.512348 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.516880 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.516894 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.521235 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0\") 
pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.532548 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gqgk\" (UniqueName: \"kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk\") pod \"ssh-known-hosts-edpm-deployment-c9sqg\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:07 crc kubenswrapper[4801]: I1206 03:50:07.654482 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:08 crc kubenswrapper[4801]: I1206 03:50:08.188348 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-c9sqg"] Dec 06 03:50:08 crc kubenswrapper[4801]: I1206 03:50:08.234646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" event={"ID":"497b1d32-7e25-419a-9daa-425b6de5889c","Type":"ContainerStarted","Data":"d4f59fbbc30640db641966ead8aa17eb45c993022f3bda7074f4752871c0a44c"} Dec 06 03:50:09 crc kubenswrapper[4801]: I1206 03:50:09.243095 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" event={"ID":"497b1d32-7e25-419a-9daa-425b6de5889c","Type":"ContainerStarted","Data":"c800969220d6ab0c7da661b2f3ad38ac79578ffc23ccb22abc1a923f599e6596"} Dec 06 03:50:09 crc kubenswrapper[4801]: I1206 03:50:09.266693 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" podStartSLOduration=1.83478084 podStartE2EDuration="2.266675922s" podCreationTimestamp="2025-12-06 03:50:07 +0000 UTC" firstStartedPulling="2025-12-06 03:50:08.192014506 +0000 UTC m=+2661.314622078" lastFinishedPulling="2025-12-06 
03:50:08.623909588 +0000 UTC m=+2661.746517160" observedRunningTime="2025-12-06 03:50:09.263120446 +0000 UTC m=+2662.385728018" watchObservedRunningTime="2025-12-06 03:50:09.266675922 +0000 UTC m=+2662.389283494" Dec 06 03:50:19 crc kubenswrapper[4801]: I1206 03:50:19.346060 4801 generic.go:334] "Generic (PLEG): container finished" podID="497b1d32-7e25-419a-9daa-425b6de5889c" containerID="c800969220d6ab0c7da661b2f3ad38ac79578ffc23ccb22abc1a923f599e6596" exitCode=0 Dec 06 03:50:19 crc kubenswrapper[4801]: I1206 03:50:19.346197 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" event={"ID":"497b1d32-7e25-419a-9daa-425b6de5889c","Type":"ContainerDied","Data":"c800969220d6ab0c7da661b2f3ad38ac79578ffc23ccb22abc1a923f599e6596"} Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.729372 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.882254 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph\") pod \"497b1d32-7e25-419a-9daa-425b6de5889c\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.882439 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0\") pod \"497b1d32-7e25-419a-9daa-425b6de5889c\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.882462 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam\") pod \"497b1d32-7e25-419a-9daa-425b6de5889c\" (UID: 
\"497b1d32-7e25-419a-9daa-425b6de5889c\") " Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.882505 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gqgk\" (UniqueName: \"kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk\") pod \"497b1d32-7e25-419a-9daa-425b6de5889c\" (UID: \"497b1d32-7e25-419a-9daa-425b6de5889c\") " Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.887835 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph" (OuterVolumeSpecName: "ceph") pod "497b1d32-7e25-419a-9daa-425b6de5889c" (UID: "497b1d32-7e25-419a-9daa-425b6de5889c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.887884 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk" (OuterVolumeSpecName: "kube-api-access-2gqgk") pod "497b1d32-7e25-419a-9daa-425b6de5889c" (UID: "497b1d32-7e25-419a-9daa-425b6de5889c"). InnerVolumeSpecName "kube-api-access-2gqgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.906951 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "497b1d32-7e25-419a-9daa-425b6de5889c" (UID: "497b1d32-7e25-419a-9daa-425b6de5889c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.911791 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "497b1d32-7e25-419a-9daa-425b6de5889c" (UID: "497b1d32-7e25-419a-9daa-425b6de5889c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.984853 4801 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.984894 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.984905 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gqgk\" (UniqueName: \"kubernetes.io/projected/497b1d32-7e25-419a-9daa-425b6de5889c-kube-api-access-2gqgk\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:20 crc kubenswrapper[4801]: I1206 03:50:20.984914 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/497b1d32-7e25-419a-9daa-425b6de5889c-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.381177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" event={"ID":"497b1d32-7e25-419a-9daa-425b6de5889c","Type":"ContainerDied","Data":"d4f59fbbc30640db641966ead8aa17eb45c993022f3bda7074f4752871c0a44c"} Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.381215 4801 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="d4f59fbbc30640db641966ead8aa17eb45c993022f3bda7074f4752871c0a44c" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.381235 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-c9sqg" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.435639 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx"] Dec 06 03:50:21 crc kubenswrapper[4801]: E1206 03:50:21.438415 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497b1d32-7e25-419a-9daa-425b6de5889c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.438441 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="497b1d32-7e25-419a-9daa-425b6de5889c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.438642 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="497b1d32-7e25-419a-9daa-425b6de5889c" containerName="ssh-known-hosts-edpm-deployment" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.439250 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.443572 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.445407 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.447569 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.447681 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.447814 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.456840 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx"] Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.598392 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.598717 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9kpc\" (UniqueName: \"kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.598935 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.599050 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.700810 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.700977 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.701028 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9kpc\" (UniqueName: 
\"kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.701079 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.706605 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.707220 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.707713 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.719788 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-m9kpc\" (UniqueName: \"kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pwxrx\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:21 crc kubenswrapper[4801]: I1206 03:50:21.767619 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:22 crc kubenswrapper[4801]: I1206 03:50:22.297000 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx"] Dec 06 03:50:22 crc kubenswrapper[4801]: I1206 03:50:22.389367 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" event={"ID":"40993b38-48ff-41fb-90a7-9c9fc03dd1e3","Type":"ContainerStarted","Data":"97511e0816d03dd399831da398f74e292d1adcf80eb3d3c063856d556d9d8206"} Dec 06 03:50:23 crc kubenswrapper[4801]: I1206 03:50:23.399492 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" event={"ID":"40993b38-48ff-41fb-90a7-9c9fc03dd1e3","Type":"ContainerStarted","Data":"0a2dea7e5cfcce899b572cc378810a5ade0e5a2daeeaf14df42d6bbd47ce1254"} Dec 06 03:50:23 crc kubenswrapper[4801]: I1206 03:50:23.418341 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" podStartSLOduration=1.997562629 podStartE2EDuration="2.418320149s" podCreationTimestamp="2025-12-06 03:50:21 +0000 UTC" firstStartedPulling="2025-12-06 03:50:22.306199465 +0000 UTC m=+2675.428807047" lastFinishedPulling="2025-12-06 03:50:22.726956975 +0000 UTC m=+2675.849564567" observedRunningTime="2025-12-06 03:50:23.412728398 +0000 UTC m=+2676.535336000" watchObservedRunningTime="2025-12-06 03:50:23.418320149 
+0000 UTC m=+2676.540927721" Dec 06 03:50:31 crc kubenswrapper[4801]: I1206 03:50:31.468319 4801 generic.go:334] "Generic (PLEG): container finished" podID="40993b38-48ff-41fb-90a7-9c9fc03dd1e3" containerID="0a2dea7e5cfcce899b572cc378810a5ade0e5a2daeeaf14df42d6bbd47ce1254" exitCode=0 Dec 06 03:50:31 crc kubenswrapper[4801]: I1206 03:50:31.468402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" event={"ID":"40993b38-48ff-41fb-90a7-9c9fc03dd1e3","Type":"ContainerDied","Data":"0a2dea7e5cfcce899b572cc378810a5ade0e5a2daeeaf14df42d6bbd47ce1254"} Dec 06 03:50:32 crc kubenswrapper[4801]: I1206 03:50:32.888130 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.018558 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph\") pod \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.018692 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key\") pod \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.018776 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory\") pod \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.018816 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-m9kpc\" (UniqueName: \"kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc\") pod \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\" (UID: \"40993b38-48ff-41fb-90a7-9c9fc03dd1e3\") " Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.028946 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph" (OuterVolumeSpecName: "ceph") pod "40993b38-48ff-41fb-90a7-9c9fc03dd1e3" (UID: "40993b38-48ff-41fb-90a7-9c9fc03dd1e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.032912 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc" (OuterVolumeSpecName: "kube-api-access-m9kpc") pod "40993b38-48ff-41fb-90a7-9c9fc03dd1e3" (UID: "40993b38-48ff-41fb-90a7-9c9fc03dd1e3"). InnerVolumeSpecName "kube-api-access-m9kpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.046089 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40993b38-48ff-41fb-90a7-9c9fc03dd1e3" (UID: "40993b38-48ff-41fb-90a7-9c9fc03dd1e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.054884 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory" (OuterVolumeSpecName: "inventory") pod "40993b38-48ff-41fb-90a7-9c9fc03dd1e3" (UID: "40993b38-48ff-41fb-90a7-9c9fc03dd1e3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.121175 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.121205 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.121216 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.121225 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9kpc\" (UniqueName: \"kubernetes.io/projected/40993b38-48ff-41fb-90a7-9c9fc03dd1e3-kube-api-access-m9kpc\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.486600 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" event={"ID":"40993b38-48ff-41fb-90a7-9c9fc03dd1e3","Type":"ContainerDied","Data":"97511e0816d03dd399831da398f74e292d1adcf80eb3d3c063856d556d9d8206"} Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.486852 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97511e0816d03dd399831da398f74e292d1adcf80eb3d3c063856d556d9d8206" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.486637 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pwxrx" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.970080 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv"] Dec 06 03:50:33 crc kubenswrapper[4801]: E1206 03:50:33.970496 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40993b38-48ff-41fb-90a7-9c9fc03dd1e3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.970511 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="40993b38-48ff-41fb-90a7-9c9fc03dd1e3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.970679 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="40993b38-48ff-41fb-90a7-9c9fc03dd1e3" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.971454 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.977456 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.977619 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.977724 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.977867 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:50:33 crc kubenswrapper[4801]: I1206 03:50:33.978059 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.007454 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv"] Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.041925 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckk9p\" (UniqueName: \"kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.041994 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.042082 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.042112 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.144608 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.144795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.144843 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.144914 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckk9p\" (UniqueName: \"kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.150427 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.150581 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.162534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.165990 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckk9p\" (UniqueName: \"kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.307677 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:34 crc kubenswrapper[4801]: I1206 03:50:34.833733 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv"] Dec 06 03:50:35 crc kubenswrapper[4801]: I1206 03:50:35.504032 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" event={"ID":"8c6a6819-7858-49d3-acc8-5b3cf8660213","Type":"ContainerStarted","Data":"1bc8227015bdb1c92431cfd70b6be4fbf024aa76c54c93d4bfcdfcbfbec22b98"} Dec 06 03:50:35 crc kubenswrapper[4801]: I1206 03:50:35.504354 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" event={"ID":"8c6a6819-7858-49d3-acc8-5b3cf8660213","Type":"ContainerStarted","Data":"3ed35ee1b693762097a1128c30e5aefcdad5b5c2f97942e99473a128899f9e02"} Dec 06 03:50:35 crc kubenswrapper[4801]: I1206 03:50:35.521847 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" podStartSLOduration=2.158311164 podStartE2EDuration="2.521831462s" podCreationTimestamp="2025-12-06 03:50:33 +0000 UTC" firstStartedPulling="2025-12-06 03:50:34.839390199 +0000 UTC m=+2687.961997771" lastFinishedPulling="2025-12-06 03:50:35.202910497 +0000 UTC m=+2688.325518069" observedRunningTime="2025-12-06 03:50:35.520323092 +0000 UTC 
m=+2688.642930664" watchObservedRunningTime="2025-12-06 03:50:35.521831462 +0000 UTC m=+2688.644439024" Dec 06 03:50:45 crc kubenswrapper[4801]: I1206 03:50:45.583926 4801 generic.go:334] "Generic (PLEG): container finished" podID="8c6a6819-7858-49d3-acc8-5b3cf8660213" containerID="1bc8227015bdb1c92431cfd70b6be4fbf024aa76c54c93d4bfcdfcbfbec22b98" exitCode=0 Dec 06 03:50:45 crc kubenswrapper[4801]: I1206 03:50:45.583981 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" event={"ID":"8c6a6819-7858-49d3-acc8-5b3cf8660213","Type":"ContainerDied","Data":"1bc8227015bdb1c92431cfd70b6be4fbf024aa76c54c93d4bfcdfcbfbec22b98"} Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.011903 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.176524 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckk9p\" (UniqueName: \"kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p\") pod \"8c6a6819-7858-49d3-acc8-5b3cf8660213\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.176625 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph\") pod \"8c6a6819-7858-49d3-acc8-5b3cf8660213\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.176696 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key\") pod \"8c6a6819-7858-49d3-acc8-5b3cf8660213\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 
03:50:47.176851 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory\") pod \"8c6a6819-7858-49d3-acc8-5b3cf8660213\" (UID: \"8c6a6819-7858-49d3-acc8-5b3cf8660213\") " Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.182199 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph" (OuterVolumeSpecName: "ceph") pod "8c6a6819-7858-49d3-acc8-5b3cf8660213" (UID: "8c6a6819-7858-49d3-acc8-5b3cf8660213"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.182908 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p" (OuterVolumeSpecName: "kube-api-access-ckk9p") pod "8c6a6819-7858-49d3-acc8-5b3cf8660213" (UID: "8c6a6819-7858-49d3-acc8-5b3cf8660213"). InnerVolumeSpecName "kube-api-access-ckk9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.202422 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory" (OuterVolumeSpecName: "inventory") pod "8c6a6819-7858-49d3-acc8-5b3cf8660213" (UID: "8c6a6819-7858-49d3-acc8-5b3cf8660213"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.204355 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c6a6819-7858-49d3-acc8-5b3cf8660213" (UID: "8c6a6819-7858-49d3-acc8-5b3cf8660213"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.279321 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.279363 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.279380 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c6a6819-7858-49d3-acc8-5b3cf8660213-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.279391 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckk9p\" (UniqueName: \"kubernetes.io/projected/8c6a6819-7858-49d3-acc8-5b3cf8660213-kube-api-access-ckk9p\") on node \"crc\" DevicePath \"\"" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.603239 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" event={"ID":"8c6a6819-7858-49d3-acc8-5b3cf8660213","Type":"ContainerDied","Data":"3ed35ee1b693762097a1128c30e5aefcdad5b5c2f97942e99473a128899f9e02"} Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.603307 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed35ee1b693762097a1128c30e5aefcdad5b5c2f97942e99473a128899f9e02" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.603309 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.694019 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr"] Dec 06 03:50:47 crc kubenswrapper[4801]: E1206 03:50:47.694698 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6a6819-7858-49d3-acc8-5b3cf8660213" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.694716 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6a6819-7858-49d3-acc8-5b3cf8660213" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.694898 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6a6819-7858-49d3-acc8-5b3cf8660213" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.695459 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.700443 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.700634 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.700739 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.700906 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.701004 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.701128 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.701171 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.701144 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.714516 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr"] Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788082 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788247 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788449 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788524 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788556 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788627 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4bj8\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788667 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788722 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788831 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788924 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788963 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.788996 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.789035 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890599 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890664 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890682 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4bj8\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890767 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890786 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890811 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890842 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890862 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890879 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890900 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.890965 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.895702 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.895853 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.896387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.896420 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.896664 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.897588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.897795 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.898197 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.898387 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.900672 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.900717 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc kubenswrapper[4801]: I1206 03:50:47.910111 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:47 crc 
kubenswrapper[4801]: I1206 03:50:47.913050 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4bj8\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-grxjr\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:48 crc kubenswrapper[4801]: I1206 03:50:48.026400 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:50:48 crc kubenswrapper[4801]: I1206 03:50:48.506687 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr"] Dec 06 03:50:48 crc kubenswrapper[4801]: W1206 03:50:48.523855 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbcf1692_907f_4ec9_a315_f39d2696c9f0.slice/crio-f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6 WatchSource:0}: Error finding container f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6: Status 404 returned error can't find the container with id f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6 Dec 06 03:50:48 crc kubenswrapper[4801]: I1206 03:50:48.527617 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:50:48 crc kubenswrapper[4801]: I1206 03:50:48.611682 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" event={"ID":"cbcf1692-907f-4ec9-a315-f39d2696c9f0","Type":"ContainerStarted","Data":"f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6"} Dec 06 03:50:49 crc kubenswrapper[4801]: I1206 03:50:49.620691 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" event={"ID":"cbcf1692-907f-4ec9-a315-f39d2696c9f0","Type":"ContainerStarted","Data":"52121c2164e8c227146dd4e782b4b407363c108212ebdd8c8a222032231ebdd2"} Dec 06 03:50:49 crc kubenswrapper[4801]: I1206 03:50:49.641159 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" podStartSLOduration=2.224862179 podStartE2EDuration="2.641135898s" podCreationTimestamp="2025-12-06 03:50:47 +0000 UTC" firstStartedPulling="2025-12-06 03:50:48.527318187 +0000 UTC m=+2701.649925759" lastFinishedPulling="2025-12-06 03:50:48.943591906 +0000 UTC m=+2702.066199478" observedRunningTime="2025-12-06 03:50:49.636540423 +0000 UTC m=+2702.759147995" watchObservedRunningTime="2025-12-06 03:50:49.641135898 +0000 UTC m=+2702.763743470" Dec 06 03:51:24 crc kubenswrapper[4801]: I1206 03:51:24.899814 4801 generic.go:334] "Generic (PLEG): container finished" podID="cbcf1692-907f-4ec9-a315-f39d2696c9f0" containerID="52121c2164e8c227146dd4e782b4b407363c108212ebdd8c8a222032231ebdd2" exitCode=0 Dec 06 03:51:24 crc kubenswrapper[4801]: I1206 03:51:24.899913 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" event={"ID":"cbcf1692-907f-4ec9-a315-f39d2696c9f0","Type":"ContainerDied","Data":"52121c2164e8c227146dd4e782b4b407363c108212ebdd8c8a222032231ebdd2"} Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.325160 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.451685 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452109 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452210 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452294 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452322 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 
03:51:26.452374 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452412 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452445 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452465 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452483 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452519 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452539 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.452586 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4bj8\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8\") pod \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\" (UID: \"cbcf1692-907f-4ec9-a315-f39d2696c9f0\") " Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.458759 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.459041 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.459676 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.461146 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.461420 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.461719 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph" (OuterVolumeSpecName: "ceph") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.461798 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.462717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.462840 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.467082 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8" (OuterVolumeSpecName: "kube-api-access-s4bj8") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "kube-api-access-s4bj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.489398 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.491172 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory" (OuterVolumeSpecName: "inventory") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.510144 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cbcf1692-907f-4ec9-a315-f39d2696c9f0" (UID: "cbcf1692-907f-4ec9-a315-f39d2696c9f0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555180 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555208 4801 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555219 4801 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555229 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555239 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555249 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555258 4801 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555267 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4bj8\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-kube-api-access-s4bj8\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555276 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555286 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555296 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/cbcf1692-907f-4ec9-a315-f39d2696c9f0-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555306 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.555314 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cbcf1692-907f-4ec9-a315-f39d2696c9f0-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.917166 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" event={"ID":"cbcf1692-907f-4ec9-a315-f39d2696c9f0","Type":"ContainerDied","Data":"f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6"} Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.917205 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f496efa44156402a1586dcf887521392a78f10ce2abc2a76993ef10d49156fb6" Dec 06 03:51:26 crc kubenswrapper[4801]: I1206 03:51:26.917255 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-grxjr" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.002141 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974"] Dec 06 03:51:27 crc kubenswrapper[4801]: E1206 03:51:27.002517 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcf1692-907f-4ec9-a315-f39d2696c9f0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.002539 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcf1692-907f-4ec9-a315-f39d2696c9f0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.002704 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcf1692-907f-4ec9-a315-f39d2696c9f0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.003275 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.004955 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.005224 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.005291 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.005383 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.009284 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.017205 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974"] Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.161907 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.162048 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.162253 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nbc\" (UniqueName: \"kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.162357 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.264551 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.264615 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.264795 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2nbc\" 
(UniqueName: \"kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.264832 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.269666 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.270075 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.270797 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 
03:51:27.283179 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2nbc\" (UniqueName: \"kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-rb974\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.319803 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.840836 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974"] Dec 06 03:51:27 crc kubenswrapper[4801]: I1206 03:51:27.925333 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" event={"ID":"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae","Type":"ContainerStarted","Data":"4879313a229abcbddcc142d91d4bd5ac600061ce5a766b6d77a34fc1abd40648"} Dec 06 03:51:28 crc kubenswrapper[4801]: I1206 03:51:28.934092 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" event={"ID":"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae","Type":"ContainerStarted","Data":"9d661b4018ba98eafa38ca7eeef5666e41dfc77fc4481decc5efa694dd72be81"} Dec 06 03:51:28 crc kubenswrapper[4801]: I1206 03:51:28.956521 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" podStartSLOduration=2.474747733 podStartE2EDuration="2.956500268s" podCreationTimestamp="2025-12-06 03:51:26 +0000 UTC" firstStartedPulling="2025-12-06 03:51:27.836450959 +0000 UTC m=+2740.959058531" lastFinishedPulling="2025-12-06 03:51:28.318203494 +0000 UTC m=+2741.440811066" observedRunningTime="2025-12-06 
03:51:28.949208972 +0000 UTC m=+2742.071816544" watchObservedRunningTime="2025-12-06 03:51:28.956500268 +0000 UTC m=+2742.079107840" Dec 06 03:51:34 crc kubenswrapper[4801]: I1206 03:51:34.982881 4801 generic.go:334] "Generic (PLEG): container finished" podID="5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" containerID="9d661b4018ba98eafa38ca7eeef5666e41dfc77fc4481decc5efa694dd72be81" exitCode=0 Dec 06 03:51:34 crc kubenswrapper[4801]: I1206 03:51:34.982959 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" event={"ID":"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae","Type":"ContainerDied","Data":"9d661b4018ba98eafa38ca7eeef5666e41dfc77fc4481decc5efa694dd72be81"} Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.452913 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.556076 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2nbc\" (UniqueName: \"kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc\") pod \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.556142 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key\") pod \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.556278 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory\") pod \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " Dec 06 
03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.556306 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph\") pod \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\" (UID: \"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae\") " Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.562989 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph" (OuterVolumeSpecName: "ceph") pod "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" (UID: "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.569070 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc" (OuterVolumeSpecName: "kube-api-access-x2nbc") pod "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" (UID: "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae"). InnerVolumeSpecName "kube-api-access-x2nbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.586234 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" (UID: "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.593399 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory" (OuterVolumeSpecName: "inventory") pod "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" (UID: "5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.658554 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2nbc\" (UniqueName: \"kubernetes.io/projected/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-kube-api-access-x2nbc\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.658595 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.658608 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.658625 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.998879 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" event={"ID":"5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae","Type":"ContainerDied","Data":"4879313a229abcbddcc142d91d4bd5ac600061ce5a766b6d77a34fc1abd40648"} Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.999163 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4879313a229abcbddcc142d91d4bd5ac600061ce5a766b6d77a34fc1abd40648" Dec 06 03:51:36 crc kubenswrapper[4801]: I1206 03:51:36.999134 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-rb974" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.085373 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69"] Dec 06 03:51:37 crc kubenswrapper[4801]: E1206 03:51:37.085781 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.085801 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.086046 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.086731 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.090072 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.090197 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.092831 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.093027 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.093221 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.093372 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.105771 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69"] Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.269839 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.269997 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.270088 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.270188 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.270361 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.270708 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-782kb\" (UniqueName: \"kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.372786 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.372837 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.372887 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.372957 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-782kb\" (UniqueName: \"kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.373046 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.373096 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.374405 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.378555 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.379587 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.383541 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceph\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.386483 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.392238 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-782kb\" (UniqueName: \"kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-trz69\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.403848 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:51:37 crc kubenswrapper[4801]: I1206 03:51:37.957945 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69"] Dec 06 03:51:38 crc kubenswrapper[4801]: I1206 03:51:38.008445 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" event={"ID":"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0","Type":"ContainerStarted","Data":"38257ca20c5b481d61ff72aa11837c579ba927dc7e399543e73f07fc69e29c9e"} Dec 06 03:51:39 crc kubenswrapper[4801]: I1206 03:51:39.017665 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" event={"ID":"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0","Type":"ContainerStarted","Data":"873e55d73c74ed74c9b7b446eb69936c9eddfe3fd6c15a26e98548ff73b10524"} Dec 06 03:51:39 crc kubenswrapper[4801]: I1206 03:51:39.039441 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" podStartSLOduration=1.642340308 podStartE2EDuration="2.039422071s" podCreationTimestamp="2025-12-06 03:51:37 +0000 UTC" firstStartedPulling="2025-12-06 03:51:37.959196146 +0000 UTC m=+2751.081803718" lastFinishedPulling="2025-12-06 03:51:38.356277909 +0000 UTC m=+2751.478885481" observedRunningTime="2025-12-06 03:51:39.035942238 +0000 UTC m=+2752.158549820" watchObservedRunningTime="2025-12-06 03:51:39.039422071 +0000 UTC m=+2752.162029643" Dec 06 03:51:41 crc kubenswrapper[4801]: I1206 03:51:41.170040 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:51:41 crc kubenswrapper[4801]: I1206 03:51:41.170121 4801 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:52:11 crc kubenswrapper[4801]: I1206 03:52:11.169607 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:52:11 crc kubenswrapper[4801]: I1206 03:52:11.170152 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.169280 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.169878 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.169937 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.170687 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.170785 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42" gracePeriod=600 Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.546067 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42" exitCode=0 Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.546159 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42"} Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.546580 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17"} Dec 06 03:52:41 crc kubenswrapper[4801]: I1206 03:52:41.546604 4801 scope.go:117] "RemoveContainer" 
containerID="f1b0dd02914953af67990330b964f722df5b14e71231d1d44797566b55024a74" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.241991 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.247190 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.269920 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.417815 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9gs7\" (UniqueName: \"kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.417950 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.418130 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.434120 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.439186 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.447899 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.521814 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.521897 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522042 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522181 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjtgk\" (UniqueName: \"kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk\") pod \"community-operators-2z4sg\" (UID: 
\"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9gs7\" (UniqueName: \"kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522279 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522449 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.522606 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities\") pod \"certified-operators-m5lcg\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.544589 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9gs7\" (UniqueName: \"kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7\") pod \"certified-operators-m5lcg\" (UID: 
\"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.578845 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.623416 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjtgk\" (UniqueName: \"kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.623521 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.623578 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.624062 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.624292 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.643041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjtgk\" (UniqueName: \"kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk\") pod \"community-operators-2z4sg\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.768198 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:44 crc kubenswrapper[4801]: I1206 03:52:44.897708 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.202272 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:45 crc kubenswrapper[4801]: W1206 03:52:45.261373 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d292cc_91a0_4fd6_a58c_ba6cf76fec4e.slice/crio-317b27714cd0b130ff82e1a704120a366c8bec1779af61c2e90262e676288672 WatchSource:0}: Error finding container 317b27714cd0b130ff82e1a704120a366c8bec1779af61c2e90262e676288672: Status 404 returned error can't find the container with id 317b27714cd0b130ff82e1a704120a366c8bec1779af61c2e90262e676288672 Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.580416 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerID="3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04" exitCode=0 Dec 06 
03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.580474 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerDied","Data":"3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04"} Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.580499 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerStarted","Data":"7d5c19bc533d052bba0c27e6ef4c818a7258a1559ad2c35541c7c62a14d78fd0"} Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.584097 4801 generic.go:334] "Generic (PLEG): container finished" podID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerID="b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f" exitCode=0 Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.584128 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerDied","Data":"b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f"} Dec 06 03:52:45 crc kubenswrapper[4801]: I1206 03:52:45.584151 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerStarted","Data":"317b27714cd0b130ff82e1a704120a366c8bec1779af61c2e90262e676288672"} Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.237480 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.240126 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.264770 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.382648 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44hg5\" (UniqueName: \"kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.382704 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.382746 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.484414 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.484530 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.484705 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44hg5\" (UniqueName: \"kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.484980 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.485016 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.509052 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44hg5\" (UniqueName: \"kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5\") pod \"redhat-marketplace-dkv59\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:47 crc kubenswrapper[4801]: I1206 03:52:47.580552 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.076846 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.609893 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerID="15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70" exitCode=0 Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.609977 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerDied","Data":"15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70"} Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.614007 4801 generic.go:334] "Generic (PLEG): container finished" podID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerID="280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85" exitCode=0 Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.614067 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerDied","Data":"280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85"} Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.617696 4801 generic.go:334] "Generic (PLEG): container finished" podID="325c682e-37d6-4c2d-9d03-2d2535731505" containerID="f52a13d037ab741697978904ceabd12d1dc79400f60842d9b8cdee5bf25c81e5" exitCode=0 Dec 06 03:52:48 crc kubenswrapper[4801]: I1206 03:52:48.617725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerDied","Data":"f52a13d037ab741697978904ceabd12d1dc79400f60842d9b8cdee5bf25c81e5"} Dec 06 03:52:48 crc 
kubenswrapper[4801]: I1206 03:52:48.617743 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerStarted","Data":"bd399c703a3b2e11002d6512b964a06476eb9e71c64d7ee447502a9a0e3ffd84"} Dec 06 03:52:49 crc kubenswrapper[4801]: I1206 03:52:49.626710 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerStarted","Data":"fe26cb8e816453194cca8b42a995ca8327316e77be830428b6d636d3f63f0b21"} Dec 06 03:52:49 crc kubenswrapper[4801]: I1206 03:52:49.628649 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerStarted","Data":"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828"} Dec 06 03:52:49 crc kubenswrapper[4801]: I1206 03:52:49.632049 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerStarted","Data":"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1"} Dec 06 03:52:49 crc kubenswrapper[4801]: I1206 03:52:49.675164 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2z4sg" podStartSLOduration=2.193338076 podStartE2EDuration="5.675141031s" podCreationTimestamp="2025-12-06 03:52:44 +0000 UTC" firstStartedPulling="2025-12-06 03:52:45.588583609 +0000 UTC m=+2818.711191181" lastFinishedPulling="2025-12-06 03:52:49.070386564 +0000 UTC m=+2822.192994136" observedRunningTime="2025-12-06 03:52:49.669463378 +0000 UTC m=+2822.792070960" watchObservedRunningTime="2025-12-06 03:52:49.675141031 +0000 UTC m=+2822.797748603" Dec 06 03:52:49 crc kubenswrapper[4801]: I1206 03:52:49.711111 4801 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m5lcg" podStartSLOduration=2.186343086 podStartE2EDuration="5.711091241s" podCreationTimestamp="2025-12-06 03:52:44 +0000 UTC" firstStartedPulling="2025-12-06 03:52:45.582151705 +0000 UTC m=+2818.704759277" lastFinishedPulling="2025-12-06 03:52:49.10689986 +0000 UTC m=+2822.229507432" observedRunningTime="2025-12-06 03:52:49.710978778 +0000 UTC m=+2822.833586360" watchObservedRunningTime="2025-12-06 03:52:49.711091241 +0000 UTC m=+2822.833698813" Dec 06 03:52:50 crc kubenswrapper[4801]: I1206 03:52:50.641122 4801 generic.go:334] "Generic (PLEG): container finished" podID="325c682e-37d6-4c2d-9d03-2d2535731505" containerID="fe26cb8e816453194cca8b42a995ca8327316e77be830428b6d636d3f63f0b21" exitCode=0 Dec 06 03:52:50 crc kubenswrapper[4801]: I1206 03:52:50.643069 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerDied","Data":"fe26cb8e816453194cca8b42a995ca8327316e77be830428b6d636d3f63f0b21"} Dec 06 03:52:52 crc kubenswrapper[4801]: I1206 03:52:52.656920 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerStarted","Data":"5a3c9eaffd79d8a740f390dac97aca0b429b7857ab5a99ba53faf5e098b13f9f"} Dec 06 03:52:52 crc kubenswrapper[4801]: I1206 03:52:52.679192 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dkv59" podStartSLOduration=2.3454984469999998 podStartE2EDuration="5.679172947s" podCreationTimestamp="2025-12-06 03:52:47 +0000 UTC" firstStartedPulling="2025-12-06 03:52:48.620596069 +0000 UTC m=+2821.743203641" lastFinishedPulling="2025-12-06 03:52:51.954270569 +0000 UTC m=+2825.076878141" observedRunningTime="2025-12-06 03:52:52.672407184 +0000 
UTC m=+2825.795014756" watchObservedRunningTime="2025-12-06 03:52:52.679172947 +0000 UTC m=+2825.801780519" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.579396 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.579788 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.629141 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.675933 4801 generic.go:334] "Generic (PLEG): container finished" podID="0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" containerID="873e55d73c74ed74c9b7b446eb69936c9eddfe3fd6c15a26e98548ff73b10524" exitCode=0 Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.676406 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" event={"ID":"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0","Type":"ContainerDied","Data":"873e55d73c74ed74c9b7b446eb69936c9eddfe3fd6c15a26e98548ff73b10524"} Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.731078 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.769923 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.769990 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:54 crc kubenswrapper[4801]: I1206 03:52:54.815918 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:55 crc kubenswrapper[4801]: I1206 03:52:55.727814 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.156205 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349040 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349206 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349384 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349432 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-782kb\" (UniqueName: \"kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349467 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.349541 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle\") pod \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\" (UID: \"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0\") " Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.357367 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.361071 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb" (OuterVolumeSpecName: "kube-api-access-782kb") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "kube-api-access-782kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.367619 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph" (OuterVolumeSpecName: "ceph") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.379559 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.382385 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.384647 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory" (OuterVolumeSpecName: "inventory") pod "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" (UID: "0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.458854 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.459075 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.459134 4801 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.459194 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-782kb\" (UniqueName: \"kubernetes.io/projected/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-kube-api-access-782kb\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.459245 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.459300 4801 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.626776 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.693668 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" event={"ID":"0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0","Type":"ContainerDied","Data":"38257ca20c5b481d61ff72aa11837c579ba927dc7e399543e73f07fc69e29c9e"} Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.693717 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38257ca20c5b481d61ff72aa11837c579ba927dc7e399543e73f07fc69e29c9e" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.693766 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-trz69" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.694124 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m5lcg" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="registry-server" containerID="cri-o://9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828" gracePeriod=2 Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.789886 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5"] Dec 06 03:52:56 crc kubenswrapper[4801]: E1206 03:52:56.790269 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.790286 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.790531 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.791246 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.795542 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.795744 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.796027 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.796027 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.796068 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.796353 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.796404 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.800966 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5"] Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.973148 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.973668 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.973715 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.973776 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5tc\" (UniqueName: \"kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.974025 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc 
kubenswrapper[4801]: I1206 03:52:56.974130 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:56 crc kubenswrapper[4801]: I1206 03:52:56.974185 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075656 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075746 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075820 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075876 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5tc\" (UniqueName: \"kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075970 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.075996 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.081008 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.082070 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.082248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.082397 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 
crc kubenswrapper[4801]: I1206 03:52:57.091006 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.091208 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.093569 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5tc\" (UniqueName: \"kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.166829 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.171007 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.176863 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content\") pod \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.177068 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9gs7\" (UniqueName: \"kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7\") pod \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.177258 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities\") pod \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\" (UID: \"fd64cc21-b134-4ee0-a308-8eafd6882bd4\") " Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.178325 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities" (OuterVolumeSpecName: "utilities") pod "fd64cc21-b134-4ee0-a308-8eafd6882bd4" (UID: "fd64cc21-b134-4ee0-a308-8eafd6882bd4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.180562 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7" (OuterVolumeSpecName: "kube-api-access-z9gs7") pod "fd64cc21-b134-4ee0-a308-8eafd6882bd4" (UID: "fd64cc21-b134-4ee0-a308-8eafd6882bd4"). 
InnerVolumeSpecName "kube-api-access-z9gs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.239303 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd64cc21-b134-4ee0-a308-8eafd6882bd4" (UID: "fd64cc21-b134-4ee0-a308-8eafd6882bd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.267493 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.280331 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.281492 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd64cc21-b134-4ee0-a308-8eafd6882bd4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.281598 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9gs7\" (UniqueName: \"kubernetes.io/projected/fd64cc21-b134-4ee0-a308-8eafd6882bd4-kube-api-access-z9gs7\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.580721 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.581076 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.626471 4801 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.704946 4801 generic.go:334] "Generic (PLEG): container finished" podID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerID="9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828" exitCode=0 Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.705037 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerDied","Data":"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828"} Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.705694 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m5lcg" event={"ID":"fd64cc21-b134-4ee0-a308-8eafd6882bd4","Type":"ContainerDied","Data":"7d5c19bc533d052bba0c27e6ef4c818a7258a1559ad2c35541c7c62a14d78fd0"} Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.705838 4801 scope.go:117] "RemoveContainer" containerID="9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.705064 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m5lcg" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.706003 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2z4sg" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="registry-server" containerID="cri-o://83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1" gracePeriod=2 Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.739207 4801 scope.go:117] "RemoveContainer" containerID="15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.750527 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.758418 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.759983 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m5lcg"] Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.763801 4801 scope.go:117] "RemoveContainer" containerID="3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.767860 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5"] Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.809045 4801 scope.go:117] "RemoveContainer" containerID="9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828" Dec 06 03:52:57 crc kubenswrapper[4801]: E1206 03:52:57.809467 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828\": container with ID 
starting with 9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828 not found: ID does not exist" containerID="9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.809501 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828"} err="failed to get container status \"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828\": rpc error: code = NotFound desc = could not find container \"9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828\": container with ID starting with 9f1f7d8c91cd8cfd456f672a20a647f595446e42e58bbd1d42408675779dd828 not found: ID does not exist" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.809591 4801 scope.go:117] "RemoveContainer" containerID="15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70" Dec 06 03:52:57 crc kubenswrapper[4801]: E1206 03:52:57.810496 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70\": container with ID starting with 15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70 not found: ID does not exist" containerID="15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.810546 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70"} err="failed to get container status \"15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70\": rpc error: code = NotFound desc = could not find container \"15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70\": container with ID starting with 15ed2577dfb66b43ec6df610f801e17a30cc68b0fd33153028ae64b7997f2f70 not found: 
ID does not exist" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.810563 4801 scope.go:117] "RemoveContainer" containerID="3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04" Dec 06 03:52:57 crc kubenswrapper[4801]: E1206 03:52:57.811265 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04\": container with ID starting with 3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04 not found: ID does not exist" containerID="3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04" Dec 06 03:52:57 crc kubenswrapper[4801]: I1206 03:52:57.811291 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04"} err="failed to get container status \"3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04\": rpc error: code = NotFound desc = could not find container \"3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04\": container with ID starting with 3d2f9a485454c2680ae43d9b615971aa26599b93e593aa695f71f7f27df89e04 not found: ID does not exist" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.652696 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.725581 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content\") pod \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.725938 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjtgk\" (UniqueName: \"kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk\") pod \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.725978 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities\") pod \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\" (UID: \"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e\") " Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.727663 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities" (OuterVolumeSpecName: "utilities") pod "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" (UID: "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.731335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" event={"ID":"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07","Type":"ContainerStarted","Data":"6a12762c167bd51d7698a8168b226dc5d4161c9e27e0a038aa8572392437e008"} Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.731812 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk" (OuterVolumeSpecName: "kube-api-access-pjtgk") pod "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" (UID: "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e"). InnerVolumeSpecName "kube-api-access-pjtgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.737232 4801 generic.go:334] "Generic (PLEG): container finished" podID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerID="83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1" exitCode=0 Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.737294 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2z4sg" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.737326 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerDied","Data":"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1"} Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.737353 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2z4sg" event={"ID":"55d292cc-91a0-4fd6-a58c-ba6cf76fec4e","Type":"ContainerDied","Data":"317b27714cd0b130ff82e1a704120a366c8bec1779af61c2e90262e676288672"} Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.737369 4801 scope.go:117] "RemoveContainer" containerID="83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.760824 4801 scope.go:117] "RemoveContainer" containerID="280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.783560 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" (UID: "55d292cc-91a0-4fd6-a58c-ba6cf76fec4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.790154 4801 scope.go:117] "RemoveContainer" containerID="b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.820540 4801 scope.go:117] "RemoveContainer" containerID="83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1" Dec 06 03:52:58 crc kubenswrapper[4801]: E1206 03:52:58.821150 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1\": container with ID starting with 83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1 not found: ID does not exist" containerID="83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.821223 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1"} err="failed to get container status \"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1\": rpc error: code = NotFound desc = could not find container \"83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1\": container with ID starting with 83eb786e249637dcf9a5861aa5edfd5f74d4d9e67a06c159f8bb2e07bd8b73e1 not found: ID does not exist" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.821278 4801 scope.go:117] "RemoveContainer" containerID="280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85" Dec 06 03:52:58 crc kubenswrapper[4801]: E1206 03:52:58.821776 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85\": container with ID starting with 
280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85 not found: ID does not exist" containerID="280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.821819 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85"} err="failed to get container status \"280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85\": rpc error: code = NotFound desc = could not find container \"280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85\": container with ID starting with 280a770b509d7957978affa35b80af7368af8c581006c6e74a259a64b1d3ba85 not found: ID does not exist" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.821844 4801 scope.go:117] "RemoveContainer" containerID="b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f" Dec 06 03:52:58 crc kubenswrapper[4801]: E1206 03:52:58.822218 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f\": container with ID starting with b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f not found: ID does not exist" containerID="b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.822243 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f"} err="failed to get container status \"b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f\": rpc error: code = NotFound desc = could not find container \"b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f\": container with ID starting with b1d19c6f36d0ce77b83e01533ad91d21b40f537875565351771aa1dea116f50f not found: ID does not 
exist" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.828917 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.828949 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjtgk\" (UniqueName: \"kubernetes.io/projected/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-kube-api-access-pjtgk\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:58 crc kubenswrapper[4801]: I1206 03:52:58.828962 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.085324 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.101921 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2z4sg"] Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.226541 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" path="/var/lib/kubelet/pods/55d292cc-91a0-4fd6-a58c-ba6cf76fec4e/volumes" Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.229008 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" path="/var/lib/kubelet/pods/fd64cc21-b134-4ee0-a308-8eafd6882bd4/volumes" Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.746512 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" 
event={"ID":"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07","Type":"ContainerStarted","Data":"b161bd1ef8c319c3efda8b1b671a6c2b687cc57ff595e021bcfe28dccd028bd5"} Dec 06 03:52:59 crc kubenswrapper[4801]: I1206 03:52:59.769219 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" podStartSLOduration=3.226990388 podStartE2EDuration="3.769200297s" podCreationTimestamp="2025-12-06 03:52:56 +0000 UTC" firstStartedPulling="2025-12-06 03:52:57.77517025 +0000 UTC m=+2830.897777822" lastFinishedPulling="2025-12-06 03:52:58.317380149 +0000 UTC m=+2831.439987731" observedRunningTime="2025-12-06 03:52:59.764316725 +0000 UTC m=+2832.886924307" watchObservedRunningTime="2025-12-06 03:52:59.769200297 +0000 UTC m=+2832.891807859" Dec 06 03:53:01 crc kubenswrapper[4801]: I1206 03:53:01.028926 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:53:01 crc kubenswrapper[4801]: I1206 03:53:01.029409 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dkv59" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="registry-server" containerID="cri-o://5a3c9eaffd79d8a740f390dac97aca0b429b7857ab5a99ba53faf5e098b13f9f" gracePeriod=2 Dec 06 03:53:01 crc kubenswrapper[4801]: I1206 03:53:01.770855 4801 generic.go:334] "Generic (PLEG): container finished" podID="325c682e-37d6-4c2d-9d03-2d2535731505" containerID="5a3c9eaffd79d8a740f390dac97aca0b429b7857ab5a99ba53faf5e098b13f9f" exitCode=0 Dec 06 03:53:01 crc kubenswrapper[4801]: I1206 03:53:01.771041 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerDied","Data":"5a3c9eaffd79d8a740f390dac97aca0b429b7857ab5a99ba53faf5e098b13f9f"} Dec 06 03:53:01 crc kubenswrapper[4801]: I1206 03:53:01.942178 4801 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.085155 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities\") pod \"325c682e-37d6-4c2d-9d03-2d2535731505\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.085361 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44hg5\" (UniqueName: \"kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5\") pod \"325c682e-37d6-4c2d-9d03-2d2535731505\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.085406 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content\") pod \"325c682e-37d6-4c2d-9d03-2d2535731505\" (UID: \"325c682e-37d6-4c2d-9d03-2d2535731505\") " Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.086361 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities" (OuterVolumeSpecName: "utilities") pod "325c682e-37d6-4c2d-9d03-2d2535731505" (UID: "325c682e-37d6-4c2d-9d03-2d2535731505"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.090515 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5" (OuterVolumeSpecName: "kube-api-access-44hg5") pod "325c682e-37d6-4c2d-9d03-2d2535731505" (UID: "325c682e-37d6-4c2d-9d03-2d2535731505"). 
InnerVolumeSpecName "kube-api-access-44hg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.107971 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "325c682e-37d6-4c2d-9d03-2d2535731505" (UID: "325c682e-37d6-4c2d-9d03-2d2535731505"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.187780 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.187814 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44hg5\" (UniqueName: \"kubernetes.io/projected/325c682e-37d6-4c2d-9d03-2d2535731505-kube-api-access-44hg5\") on node \"crc\" DevicePath \"\"" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.187823 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/325c682e-37d6-4c2d-9d03-2d2535731505-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.787867 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dkv59" event={"ID":"325c682e-37d6-4c2d-9d03-2d2535731505","Type":"ContainerDied","Data":"bd399c703a3b2e11002d6512b964a06476eb9e71c64d7ee447502a9a0e3ffd84"} Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.787936 4801 scope.go:117] "RemoveContainer" containerID="5a3c9eaffd79d8a740f390dac97aca0b429b7857ab5a99ba53faf5e098b13f9f" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.787933 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dkv59" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.833975 4801 scope.go:117] "RemoveContainer" containerID="fe26cb8e816453194cca8b42a995ca8327316e77be830428b6d636d3f63f0b21" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.852044 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.862603 4801 scope.go:117] "RemoveContainer" containerID="f52a13d037ab741697978904ceabd12d1dc79400f60842d9b8cdee5bf25c81e5" Dec 06 03:53:02 crc kubenswrapper[4801]: I1206 03:53:02.865520 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dkv59"] Dec 06 03:53:03 crc kubenswrapper[4801]: I1206 03:53:03.224857 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" path="/var/lib/kubelet/pods/325c682e-37d6-4c2d-9d03-2d2535731505/volumes" Dec 06 03:54:07 crc kubenswrapper[4801]: I1206 03:54:07.354560 4801 generic.go:334] "Generic (PLEG): container finished" podID="2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" containerID="b161bd1ef8c319c3efda8b1b671a6c2b687cc57ff595e021bcfe28dccd028bd5" exitCode=0 Dec 06 03:54:07 crc kubenswrapper[4801]: I1206 03:54:07.354649 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" event={"ID":"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07","Type":"ContainerDied","Data":"b161bd1ef8c319c3efda8b1b671a6c2b687cc57ff595e021bcfe28dccd028bd5"} Dec 06 03:54:08 crc kubenswrapper[4801]: I1206 03:54:08.816920 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.004730 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005000 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005037 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005073 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5tc\" (UniqueName: \"kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005175 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005276 4801 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.005379 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key\") pod \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\" (UID: \"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07\") " Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.013066 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc" (OuterVolumeSpecName: "kube-api-access-ft5tc") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "kube-api-access-ft5tc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.016265 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.028434 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph" (OuterVolumeSpecName: "ceph") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.047639 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.048381 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.048636 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.049860 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory" (OuterVolumeSpecName: "inventory") pod "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" (UID: "2f85a4d5-23d5-4e42-ba7e-d05f2062ba07"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114686 4801 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114783 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114806 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114825 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5tc\" (UniqueName: \"kubernetes.io/projected/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-kube-api-access-ft5tc\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114841 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114854 4801 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.114872 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f85a4d5-23d5-4e42-ba7e-d05f2062ba07-ssh-key\") on node \"crc\" 
DevicePath \"\"" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.382413 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" event={"ID":"2f85a4d5-23d5-4e42-ba7e-d05f2062ba07","Type":"ContainerDied","Data":"6a12762c167bd51d7698a8168b226dc5d4161c9e27e0a038aa8572392437e008"} Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.382484 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a12762c167bd51d7698a8168b226dc5d4161c9e27e0a038aa8572392437e008" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.382721 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.485804 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs"] Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486472 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486500 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486511 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486520 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486538 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="extract-content" Dec 06 03:54:09 
crc kubenswrapper[4801]: I1206 03:54:09.486547 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="extract-content" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486556 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486565 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486581 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486590 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486609 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="extract-content" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486617 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="extract-content" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486642 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486649 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486664 4801 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486675 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="extract-utilities" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486693 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="extract-content" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486703 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="extract-content" Dec 06 03:54:09 crc kubenswrapper[4801]: E1206 03:54:09.486718 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486725 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486978 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="325c682e-37d6-4c2d-9d03-2d2535731505" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.486996 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f85a4d5-23d5-4e42-ba7e-d05f2062ba07" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.487006 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd64cc21-b134-4ee0-a308-8eafd6882bd4" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.487019 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d292cc-91a0-4fd6-a58c-ba6cf76fec4e" containerName="registry-server" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.487992 4801 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.491557 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.491954 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.492135 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.492332 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.492772 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.493390 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.504494 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs"] Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624404 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624491 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624575 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624637 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg5m8\" (UniqueName: \"kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.624659 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726391 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg5m8\" (UniqueName: \"kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726442 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726530 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726554 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726610 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.726633 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.731351 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.733639 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.734325 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.735823 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.735909 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.746328 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg5m8\" (UniqueName: \"kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-h7khs\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:09 crc kubenswrapper[4801]: I1206 03:54:09.812950 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:54:10 crc kubenswrapper[4801]: I1206 03:54:10.323779 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs"] Dec 06 03:54:10 crc kubenswrapper[4801]: I1206 03:54:10.393005 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" event={"ID":"899596fb-4d4f-419a-be54-3d236d8af270","Type":"ContainerStarted","Data":"0ec1a9032c82f9964044be40c0cbfb53056f4768ec1a28d54c52e6cf7ea3da04"} Dec 06 03:54:11 crc kubenswrapper[4801]: I1206 03:54:11.404602 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" event={"ID":"899596fb-4d4f-419a-be54-3d236d8af270","Type":"ContainerStarted","Data":"9c7dc43af5b7486143faed18a606209e8d857e9bc5595ffb85708c9d4cdf5f3c"} Dec 06 03:54:11 crc kubenswrapper[4801]: I1206 03:54:11.421795 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" podStartSLOduration=2.000129895 podStartE2EDuration="2.42178023s" podCreationTimestamp="2025-12-06 03:54:09 +0000 UTC" firstStartedPulling="2025-12-06 03:54:10.33019483 +0000 UTC m=+2903.452802412" lastFinishedPulling="2025-12-06 03:54:10.751845175 +0000 UTC m=+2903.874452747" observedRunningTime="2025-12-06 03:54:11.421282806 +0000 UTC m=+2904.543890378" watchObservedRunningTime="2025-12-06 03:54:11.42178023 +0000 UTC m=+2904.544387802" Dec 06 03:54:41 crc kubenswrapper[4801]: I1206 03:54:41.169500 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:54:41 crc kubenswrapper[4801]: I1206 
03:54:41.170024 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:55:11 crc kubenswrapper[4801]: I1206 03:55:11.169631 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:55:11 crc kubenswrapper[4801]: I1206 03:55:11.170303 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.169499 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.170035 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.170080 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.170769 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.170815 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" gracePeriod=600 Dec 06 03:55:41 crc kubenswrapper[4801]: E1206 03:55:41.290879 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.596035 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" exitCode=0 Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.596377 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17"} Dec 06 03:55:41 crc 
kubenswrapper[4801]: I1206 03:55:41.596530 4801 scope.go:117] "RemoveContainer" containerID="c79b390ea3581522085e63ad693438b7dd55b8490e583df33beab6dd02e9de42" Dec 06 03:55:41 crc kubenswrapper[4801]: I1206 03:55:41.597578 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:55:41 crc kubenswrapper[4801]: E1206 03:55:41.598231 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:55:56 crc kubenswrapper[4801]: I1206 03:55:56.212874 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:55:56 crc kubenswrapper[4801]: E1206 03:55:56.213583 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:56:11 crc kubenswrapper[4801]: I1206 03:56:11.214549 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:56:11 crc kubenswrapper[4801]: E1206 03:56:11.215459 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:56:24 crc kubenswrapper[4801]: I1206 03:56:24.213435 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:56:24 crc kubenswrapper[4801]: E1206 03:56:24.215073 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:56:37 crc kubenswrapper[4801]: I1206 03:56:37.221276 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:56:37 crc kubenswrapper[4801]: E1206 03:56:37.225474 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:56:51 crc kubenswrapper[4801]: I1206 03:56:51.213098 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:56:51 crc kubenswrapper[4801]: E1206 03:56:51.213794 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:57:05 crc kubenswrapper[4801]: I1206 03:57:05.213649 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:57:05 crc kubenswrapper[4801]: E1206 03:57:05.215195 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:57:20 crc kubenswrapper[4801]: I1206 03:57:20.212349 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:57:20 crc kubenswrapper[4801]: E1206 03:57:20.213261 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:57:33 crc kubenswrapper[4801]: I1206 03:57:33.213895 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:57:33 crc kubenswrapper[4801]: E1206 03:57:33.215134 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:57:44 crc kubenswrapper[4801]: I1206 03:57:44.214066 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:57:44 crc kubenswrapper[4801]: E1206 03:57:44.214991 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:57:59 crc kubenswrapper[4801]: I1206 03:57:59.212719 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:57:59 crc kubenswrapper[4801]: E1206 03:57:59.213903 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:58:12 crc kubenswrapper[4801]: I1206 03:58:12.212013 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:58:12 crc kubenswrapper[4801]: E1206 03:58:12.212659 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:58:25 crc kubenswrapper[4801]: I1206 03:58:25.213141 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:58:25 crc kubenswrapper[4801]: E1206 03:58:25.213957 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:58:36 crc kubenswrapper[4801]: I1206 03:58:36.212799 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:58:36 crc kubenswrapper[4801]: E1206 03:58:36.213628 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:58:47 crc kubenswrapper[4801]: I1206 03:58:47.223962 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:58:47 crc kubenswrapper[4801]: E1206 03:58:47.224649 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:59:00 crc kubenswrapper[4801]: I1206 03:59:00.212088 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:59:00 crc kubenswrapper[4801]: E1206 03:59:00.212701 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:59:15 crc kubenswrapper[4801]: I1206 03:59:15.212964 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:59:15 crc kubenswrapper[4801]: E1206 03:59:15.213811 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:59:26 crc kubenswrapper[4801]: I1206 03:59:26.884194 4801 generic.go:334] "Generic (PLEG): container finished" podID="899596fb-4d4f-419a-be54-3d236d8af270" containerID="9c7dc43af5b7486143faed18a606209e8d857e9bc5595ffb85708c9d4cdf5f3c" exitCode=0 Dec 06 03:59:26 crc kubenswrapper[4801]: I1206 03:59:26.884339 4801 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" event={"ID":"899596fb-4d4f-419a-be54-3d236d8af270","Type":"ContainerDied","Data":"9c7dc43af5b7486143faed18a606209e8d857e9bc5595ffb85708c9d4cdf5f3c"} Dec 06 03:59:27 crc kubenswrapper[4801]: I1206 03:59:27.229003 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:59:27 crc kubenswrapper[4801]: E1206 03:59:27.230227 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.298355 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424030 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424188 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424276 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424335 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg5m8\" (UniqueName: \"kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424370 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.424407 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle\") pod \"899596fb-4d4f-419a-be54-3d236d8af270\" (UID: \"899596fb-4d4f-419a-be54-3d236d8af270\") " Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.430071 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph" (OuterVolumeSpecName: "ceph") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.430112 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8" (OuterVolumeSpecName: "kube-api-access-cg5m8") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "kube-api-access-cg5m8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.436814 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.449980 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.452045 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.459713 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory" (OuterVolumeSpecName: "inventory") pod "899596fb-4d4f-419a-be54-3d236d8af270" (UID: "899596fb-4d4f-419a-be54-3d236d8af270"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.526360 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.526390 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.526402 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg5m8\" (UniqueName: \"kubernetes.io/projected/899596fb-4d4f-419a-be54-3d236d8af270-kube-api-access-cg5m8\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.526414 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 
03:59:28.526424 4801 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.526431 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/899596fb-4d4f-419a-be54-3d236d8af270-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.901662 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" event={"ID":"899596fb-4d4f-419a-be54-3d236d8af270","Type":"ContainerDied","Data":"0ec1a9032c82f9964044be40c0cbfb53056f4768ec1a28d54c52e6cf7ea3da04"} Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.901717 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec1a9032c82f9964044be40c0cbfb53056f4768ec1a28d54c52e6cf7ea3da04" Dec 06 03:59:28 crc kubenswrapper[4801]: I1206 03:59:28.901734 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-h7khs" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.020162 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c"] Dec 06 03:59:29 crc kubenswrapper[4801]: E1206 03:59:29.020861 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899596fb-4d4f-419a-be54-3d236d8af270" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.020886 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="899596fb-4d4f-419a-be54-3d236d8af270" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.021129 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="899596fb-4d4f-419a-be54-3d236d8af270" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.022211 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031062 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031155 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031329 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031368 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031437 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n8qd8" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031483 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031544 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031600 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.031630 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.039035 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c"] Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140596 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dbgnf\" (UniqueName: \"kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140659 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140700 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140791 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140817 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140843 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140861 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140926 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140953 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: 
\"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.140973 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.141043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244172 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244309 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbgnf\" (UniqueName: \"kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244362 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244457 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244571 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 
03:59:29.244711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244813 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.244961 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.245060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.245142 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.245983 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.246465 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.250943 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.251554 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.251570 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.251691 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.252614 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.253300 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.255620 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.258285 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.286541 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbgnf\" (UniqueName: \"kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.341450 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.940543 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c"] Dec 06 03:59:29 crc kubenswrapper[4801]: I1206 03:59:29.942649 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 03:59:30 crc kubenswrapper[4801]: I1206 03:59:30.925899 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" event={"ID":"5651abf8-1969-4df5-a8bf-274fcc9edffe","Type":"ContainerStarted","Data":"13728a237fed5f93c04a8c043467e37d527995758cd80a5c302bf8d4ad67b24d"} Dec 06 03:59:31 crc kubenswrapper[4801]: I1206 03:59:31.937004 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" event={"ID":"5651abf8-1969-4df5-a8bf-274fcc9edffe","Type":"ContainerStarted","Data":"d539ff8e1b9d69a3b2675f81f74b517df7b1f411e9f03fbae8e6a24b72402222"} Dec 06 03:59:31 crc kubenswrapper[4801]: I1206 03:59:31.970781 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" podStartSLOduration=2.838462757 podStartE2EDuration="3.97073271s" podCreationTimestamp="2025-12-06 03:59:28 +0000 UTC" firstStartedPulling="2025-12-06 03:59:29.942465513 +0000 UTC m=+3223.065073085" lastFinishedPulling="2025-12-06 03:59:31.074735456 +0000 UTC m=+3224.197343038" observedRunningTime="2025-12-06 03:59:31.965190081 +0000 UTC m=+3225.087797683" watchObservedRunningTime="2025-12-06 03:59:31.97073271 +0000 UTC m=+3225.093340272" Dec 06 03:59:39 crc kubenswrapper[4801]: I1206 03:59:39.228712 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:59:39 
crc kubenswrapper[4801]: E1206 03:59:39.229669 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 03:59:54 crc kubenswrapper[4801]: I1206 03:59:54.213178 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 03:59:54 crc kubenswrapper[4801]: E1206 03:59:54.214147 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.159379 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn"] Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.161036 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.163254 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.163964 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.177606 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn"] Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.209043 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.209097 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn67k\" (UniqueName: \"kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.209137 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.311476 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn67k\" (UniqueName: \"kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.311594 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.311953 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.313378 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.319796 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.333999 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn67k\" (UniqueName: \"kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k\") pod \"collect-profiles-29416560-mdjsn\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.483327 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:00 crc kubenswrapper[4801]: I1206 04:00:00.935051 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn"] Dec 06 04:00:01 crc kubenswrapper[4801]: I1206 04:00:01.256454 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" event={"ID":"2792cd45-43ee-4546-b8c4-027d4cbe5e29","Type":"ContainerStarted","Data":"79d416fa2d2546ad2c3223462fe64a4bad9fc7d7f093a9a745e579a227e0e3f7"} Dec 06 04:00:01 crc kubenswrapper[4801]: I1206 04:00:01.256501 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" event={"ID":"2792cd45-43ee-4546-b8c4-027d4cbe5e29","Type":"ContainerStarted","Data":"88992fab6cab55497db42cddf1bb6271ec6a9fb733f3e7e9fdfbfbac5ff0ddb5"} Dec 06 04:00:01 crc kubenswrapper[4801]: I1206 04:00:01.278395 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" 
podStartSLOduration=1.278378283 podStartE2EDuration="1.278378283s" podCreationTimestamp="2025-12-06 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:00:01.270375307 +0000 UTC m=+3254.392982879" watchObservedRunningTime="2025-12-06 04:00:01.278378283 +0000 UTC m=+3254.400985855" Dec 06 04:00:02 crc kubenswrapper[4801]: I1206 04:00:02.267244 4801 generic.go:334] "Generic (PLEG): container finished" podID="2792cd45-43ee-4546-b8c4-027d4cbe5e29" containerID="79d416fa2d2546ad2c3223462fe64a4bad9fc7d7f093a9a745e579a227e0e3f7" exitCode=0 Dec 06 04:00:02 crc kubenswrapper[4801]: I1206 04:00:02.267288 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" event={"ID":"2792cd45-43ee-4546-b8c4-027d4cbe5e29","Type":"ContainerDied","Data":"79d416fa2d2546ad2c3223462fe64a4bad9fc7d7f093a9a745e579a227e0e3f7"} Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.612509 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.782973 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume\") pod \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.783094 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn67k\" (UniqueName: \"kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k\") pod \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.783147 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume\") pod \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\" (UID: \"2792cd45-43ee-4546-b8c4-027d4cbe5e29\") " Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.783953 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume" (OuterVolumeSpecName: "config-volume") pod "2792cd45-43ee-4546-b8c4-027d4cbe5e29" (UID: "2792cd45-43ee-4546-b8c4-027d4cbe5e29"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.788533 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k" (OuterVolumeSpecName: "kube-api-access-mn67k") pod "2792cd45-43ee-4546-b8c4-027d4cbe5e29" (UID: "2792cd45-43ee-4546-b8c4-027d4cbe5e29"). 
InnerVolumeSpecName "kube-api-access-mn67k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.794941 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2792cd45-43ee-4546-b8c4-027d4cbe5e29" (UID: "2792cd45-43ee-4546-b8c4-027d4cbe5e29"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.885425 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2792cd45-43ee-4546-b8c4-027d4cbe5e29-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.885463 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn67k\" (UniqueName: \"kubernetes.io/projected/2792cd45-43ee-4546-b8c4-027d4cbe5e29-kube-api-access-mn67k\") on node \"crc\" DevicePath \"\"" Dec 06 04:00:03 crc kubenswrapper[4801]: I1206 04:00:03.885473 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2792cd45-43ee-4546-b8c4-027d4cbe5e29-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:00:04 crc kubenswrapper[4801]: I1206 04:00:04.284569 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" event={"ID":"2792cd45-43ee-4546-b8c4-027d4cbe5e29","Type":"ContainerDied","Data":"88992fab6cab55497db42cddf1bb6271ec6a9fb733f3e7e9fdfbfbac5ff0ddb5"} Dec 06 04:00:04 crc kubenswrapper[4801]: I1206 04:00:04.284897 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88992fab6cab55497db42cddf1bb6271ec6a9fb733f3e7e9fdfbfbac5ff0ddb5" Dec 06 04:00:04 crc kubenswrapper[4801]: I1206 04:00:04.284655 4801 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn" Dec 06 04:00:04 crc kubenswrapper[4801]: I1206 04:00:04.347892 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7"] Dec 06 04:00:04 crc kubenswrapper[4801]: I1206 04:00:04.355730 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416515-fr4t7"] Dec 06 04:00:05 crc kubenswrapper[4801]: I1206 04:00:05.234640 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81dd37a5-79be-462b-83ca-8b4900c7af34" path="/var/lib/kubelet/pods/81dd37a5-79be-462b-83ca-8b4900c7af34/volumes" Dec 06 04:00:06 crc kubenswrapper[4801]: I1206 04:00:06.212216 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 04:00:06 crc kubenswrapper[4801]: E1206 04:00:06.212635 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:00:18 crc kubenswrapper[4801]: I1206 04:00:18.212467 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 04:00:18 crc kubenswrapper[4801]: E1206 04:00:18.213335 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:00:21 crc kubenswrapper[4801]: I1206 04:00:21.485650 4801 scope.go:117] "RemoveContainer" containerID="bc7defeeeb0c84cec11f23e4901699adb2d18fc9e0d3b48c133973d313a47984" Dec 06 04:00:33 crc kubenswrapper[4801]: I1206 04:00:33.212442 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 04:00:33 crc kubenswrapper[4801]: E1206 04:00:33.213131 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:00:48 crc kubenswrapper[4801]: I1206 04:00:48.212187 4801 scope.go:117] "RemoveContainer" containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 04:00:48 crc kubenswrapper[4801]: I1206 04:00:48.657083 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe"} Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.145749 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416561-8pbqv"] Dec 06 04:01:00 crc kubenswrapper[4801]: E1206 04:01:00.146610 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2792cd45-43ee-4546-b8c4-027d4cbe5e29" containerName="collect-profiles" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.146623 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2792cd45-43ee-4546-b8c4-027d4cbe5e29" containerName="collect-profiles" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.146829 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2792cd45-43ee-4546-b8c4-027d4cbe5e29" containerName="collect-profiles" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.147529 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.161893 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416561-8pbqv"] Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.237139 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tlrt\" (UniqueName: \"kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.237190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.237357 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.237460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.338779 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tlrt\" (UniqueName: \"kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.338834 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.338903 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.338950 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.348734 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.348812 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.349493 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.359239 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tlrt\" (UniqueName: \"kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt\") pod \"keystone-cron-29416561-8pbqv\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.509108 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:00 crc kubenswrapper[4801]: I1206 04:01:00.988820 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416561-8pbqv"] Dec 06 04:01:01 crc kubenswrapper[4801]: I1206 04:01:01.772075 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416561-8pbqv" event={"ID":"0d0b5912-792b-4abb-9d65-2bc033319f4a","Type":"ContainerStarted","Data":"7f7f79a1c981a64fb40239fff3b51a39e70866a98dc239be7e22ee13c58b50d7"} Dec 06 04:01:01 crc kubenswrapper[4801]: I1206 04:01:01.772695 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416561-8pbqv" event={"ID":"0d0b5912-792b-4abb-9d65-2bc033319f4a","Type":"ContainerStarted","Data":"5526cf0f2c3937a3ae02ce3dbc457146072cd084e4b88278f54e847ac9074a77"} Dec 06 04:01:01 crc kubenswrapper[4801]: I1206 04:01:01.802502 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416561-8pbqv" podStartSLOduration=1.802480743 podStartE2EDuration="1.802480743s" podCreationTimestamp="2025-12-06 04:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:01:01.792533675 +0000 UTC m=+3314.915141257" watchObservedRunningTime="2025-12-06 04:01:01.802480743 +0000 UTC m=+3314.925088335" Dec 06 04:01:03 crc kubenswrapper[4801]: I1206 04:01:03.787319 4801 generic.go:334] "Generic (PLEG): container finished" podID="0d0b5912-792b-4abb-9d65-2bc033319f4a" containerID="7f7f79a1c981a64fb40239fff3b51a39e70866a98dc239be7e22ee13c58b50d7" exitCode=0 Dec 06 04:01:03 crc kubenswrapper[4801]: I1206 04:01:03.787595 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416561-8pbqv" 
event={"ID":"0d0b5912-792b-4abb-9d65-2bc033319f4a","Type":"ContainerDied","Data":"7f7f79a1c981a64fb40239fff3b51a39e70866a98dc239be7e22ee13c58b50d7"} Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.196894 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.336181 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys\") pod \"0d0b5912-792b-4abb-9d65-2bc033319f4a\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.336228 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle\") pod \"0d0b5912-792b-4abb-9d65-2bc033319f4a\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.336340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data\") pod \"0d0b5912-792b-4abb-9d65-2bc033319f4a\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.336397 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tlrt\" (UniqueName: \"kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt\") pod \"0d0b5912-792b-4abb-9d65-2bc033319f4a\" (UID: \"0d0b5912-792b-4abb-9d65-2bc033319f4a\") " Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.342510 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt" 
(OuterVolumeSpecName: "kube-api-access-4tlrt") pod "0d0b5912-792b-4abb-9d65-2bc033319f4a" (UID: "0d0b5912-792b-4abb-9d65-2bc033319f4a"). InnerVolumeSpecName "kube-api-access-4tlrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.350911 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0d0b5912-792b-4abb-9d65-2bc033319f4a" (UID: "0d0b5912-792b-4abb-9d65-2bc033319f4a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.368288 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d0b5912-792b-4abb-9d65-2bc033319f4a" (UID: "0d0b5912-792b-4abb-9d65-2bc033319f4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.399926 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data" (OuterVolumeSpecName: "config-data") pod "0d0b5912-792b-4abb-9d65-2bc033319f4a" (UID: "0d0b5912-792b-4abb-9d65-2bc033319f4a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.439437 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tlrt\" (UniqueName: \"kubernetes.io/projected/0d0b5912-792b-4abb-9d65-2bc033319f4a-kube-api-access-4tlrt\") on node \"crc\" DevicePath \"\"" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.439476 4801 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.439486 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:01:05 crc kubenswrapper[4801]: I1206 04:01:05.439496 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0b5912-792b-4abb-9d65-2bc033319f4a-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:01:06 crc kubenswrapper[4801]: I1206 04:01:06.161818 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416561-8pbqv" event={"ID":"0d0b5912-792b-4abb-9d65-2bc033319f4a","Type":"ContainerDied","Data":"5526cf0f2c3937a3ae02ce3dbc457146072cd084e4b88278f54e847ac9074a77"} Dec 06 04:01:06 crc kubenswrapper[4801]: I1206 04:01:06.161862 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5526cf0f2c3937a3ae02ce3dbc457146072cd084e4b88278f54e847ac9074a77" Dec 06 04:01:06 crc kubenswrapper[4801]: I1206 04:01:06.161919 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416561-8pbqv" Dec 06 04:02:57 crc kubenswrapper[4801]: I1206 04:02:57.991913 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:02:57 crc kubenswrapper[4801]: E1206 04:02:57.993348 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d0b5912-792b-4abb-9d65-2bc033319f4a" containerName="keystone-cron" Dec 06 04:02:57 crc kubenswrapper[4801]: I1206 04:02:57.993367 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d0b5912-792b-4abb-9d65-2bc033319f4a" containerName="keystone-cron" Dec 06 04:02:57 crc kubenswrapper[4801]: I1206 04:02:57.993579 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d0b5912-792b-4abb-9d65-2bc033319f4a" containerName="keystone-cron" Dec 06 04:02:57 crc kubenswrapper[4801]: I1206 04:02:57.995100 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.015378 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.120228 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.120266 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pbt\" (UniqueName: \"kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " 
pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.120387 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.223085 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.223150 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68pbt\" (UniqueName: \"kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.223255 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.223651 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " 
pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.223711 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.240451 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68pbt\" (UniqueName: \"kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt\") pod \"certified-operators-xmqlb\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.337308 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:02:58 crc kubenswrapper[4801]: I1206 04:02:58.823971 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:02:59 crc kubenswrapper[4801]: I1206 04:02:59.192007 4801 generic.go:334] "Generic (PLEG): container finished" podID="bad5a894-a904-48f6-8de6-078d57693310" containerID="51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2" exitCode=0 Dec 06 04:02:59 crc kubenswrapper[4801]: I1206 04:02:59.192056 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerDied","Data":"51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2"} Dec 06 04:02:59 crc kubenswrapper[4801]: I1206 04:02:59.192086 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" 
event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerStarted","Data":"41ae5e60225550b9c77fd0cd68789e2b69dd1205a347ddb47176ec3f73773cd4"} Dec 06 04:03:00 crc kubenswrapper[4801]: I1206 04:03:00.219671 4801 generic.go:334] "Generic (PLEG): container finished" podID="bad5a894-a904-48f6-8de6-078d57693310" containerID="567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d" exitCode=0 Dec 06 04:03:00 crc kubenswrapper[4801]: I1206 04:03:00.219752 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerDied","Data":"567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d"} Dec 06 04:03:01 crc kubenswrapper[4801]: I1206 04:03:01.240777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerStarted","Data":"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb"} Dec 06 04:03:01 crc kubenswrapper[4801]: I1206 04:03:01.288274 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xmqlb" podStartSLOduration=2.820434491 podStartE2EDuration="4.288251985s" podCreationTimestamp="2025-12-06 04:02:57 +0000 UTC" firstStartedPulling="2025-12-06 04:02:59.19581631 +0000 UTC m=+3432.318423882" lastFinishedPulling="2025-12-06 04:03:00.663633774 +0000 UTC m=+3433.786241376" observedRunningTime="2025-12-06 04:03:01.280079655 +0000 UTC m=+3434.402687227" watchObservedRunningTime="2025-12-06 04:03:01.288251985 +0000 UTC m=+3434.410859567" Dec 06 04:03:08 crc kubenswrapper[4801]: I1206 04:03:08.337853 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:08 crc kubenswrapper[4801]: I1206 04:03:08.338582 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:08 crc kubenswrapper[4801]: I1206 04:03:08.391047 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:09 crc kubenswrapper[4801]: I1206 04:03:09.381454 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:09 crc kubenswrapper[4801]: I1206 04:03:09.450577 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:03:11 crc kubenswrapper[4801]: I1206 04:03:11.170341 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:03:11 crc kubenswrapper[4801]: I1206 04:03:11.170704 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:03:11 crc kubenswrapper[4801]: I1206 04:03:11.339963 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xmqlb" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="registry-server" containerID="cri-o://7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb" gracePeriod=2 Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.285425 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.349828 4801 generic.go:334] "Generic (PLEG): container finished" podID="bad5a894-a904-48f6-8de6-078d57693310" containerID="7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb" exitCode=0 Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.350134 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerDied","Data":"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb"} Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.350163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xmqlb" event={"ID":"bad5a894-a904-48f6-8de6-078d57693310","Type":"ContainerDied","Data":"41ae5e60225550b9c77fd0cd68789e2b69dd1205a347ddb47176ec3f73773cd4"} Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.350179 4801 scope.go:117] "RemoveContainer" containerID="7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.350304 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xmqlb" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.378064 4801 scope.go:117] "RemoveContainer" containerID="567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.403341 4801 scope.go:117] "RemoveContainer" containerID="51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.415919 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68pbt\" (UniqueName: \"kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt\") pod \"bad5a894-a904-48f6-8de6-078d57693310\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.415973 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities\") pod \"bad5a894-a904-48f6-8de6-078d57693310\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.416142 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content\") pod \"bad5a894-a904-48f6-8de6-078d57693310\" (UID: \"bad5a894-a904-48f6-8de6-078d57693310\") " Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.417912 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities" (OuterVolumeSpecName: "utilities") pod "bad5a894-a904-48f6-8de6-078d57693310" (UID: "bad5a894-a904-48f6-8de6-078d57693310"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.422259 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt" (OuterVolumeSpecName: "kube-api-access-68pbt") pod "bad5a894-a904-48f6-8de6-078d57693310" (UID: "bad5a894-a904-48f6-8de6-078d57693310"). InnerVolumeSpecName "kube-api-access-68pbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.484451 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bad5a894-a904-48f6-8de6-078d57693310" (UID: "bad5a894-a904-48f6-8de6-078d57693310"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.500922 4801 scope.go:117] "RemoveContainer" containerID="7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb" Dec 06 04:03:12 crc kubenswrapper[4801]: E1206 04:03:12.501508 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb\": container with ID starting with 7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb not found: ID does not exist" containerID="7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.501556 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb"} err="failed to get container status \"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb\": rpc error: code = NotFound desc = could not find 
container \"7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb\": container with ID starting with 7a783f7b4c1fb9921219b55fcfafeb5ef0b5f5af86f8b34998abe5ae721e66cb not found: ID does not exist" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.501587 4801 scope.go:117] "RemoveContainer" containerID="567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d" Dec 06 04:03:12 crc kubenswrapper[4801]: E1206 04:03:12.502333 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d\": container with ID starting with 567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d not found: ID does not exist" containerID="567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.502419 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d"} err="failed to get container status \"567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d\": rpc error: code = NotFound desc = could not find container \"567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d\": container with ID starting with 567ac0b2de8cd72373eb65e2337dc6e42ff1a9d79dc0d722261fe27775e38a1d not found: ID does not exist" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.502451 4801 scope.go:117] "RemoveContainer" containerID="51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2" Dec 06 04:03:12 crc kubenswrapper[4801]: E1206 04:03:12.502807 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2\": container with ID starting with 51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2 not found: ID does 
not exist" containerID="51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.502840 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2"} err="failed to get container status \"51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2\": rpc error: code = NotFound desc = could not find container \"51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2\": container with ID starting with 51cd83f5c913824d902b67a330ba29f8a761bcf6c1b4af488eec6b498e27f9c2 not found: ID does not exist" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.518245 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68pbt\" (UniqueName: \"kubernetes.io/projected/bad5a894-a904-48f6-8de6-078d57693310-kube-api-access-68pbt\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.518277 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.518290 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad5a894-a904-48f6-8de6-078d57693310-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.691971 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:03:12 crc kubenswrapper[4801]: I1206 04:03:12.702822 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xmqlb"] Dec 06 04:03:13 crc kubenswrapper[4801]: I1206 04:03:13.222345 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bad5a894-a904-48f6-8de6-078d57693310" path="/var/lib/kubelet/pods/bad5a894-a904-48f6-8de6-078d57693310/volumes" Dec 06 04:03:18 crc kubenswrapper[4801]: I1206 04:03:18.403049 4801 generic.go:334] "Generic (PLEG): container finished" podID="5651abf8-1969-4df5-a8bf-274fcc9edffe" containerID="d539ff8e1b9d69a3b2675f81f74b517df7b1f411e9f03fbae8e6a24b72402222" exitCode=0 Dec 06 04:03:18 crc kubenswrapper[4801]: I1206 04:03:18.403177 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" event={"ID":"5651abf8-1969-4df5-a8bf-274fcc9edffe","Type":"ContainerDied","Data":"d539ff8e1b9d69a3b2675f81f74b517df7b1f411e9f03fbae8e6a24b72402222"} Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.878039 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.965865 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.965906 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.965938 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: 
\"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966014 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966067 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966233 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966299 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966358 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966382 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dbgnf\" (UniqueName: \"kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966440 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.966458 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1\") pod \"5651abf8-1969-4df5-a8bf-274fcc9edffe\" (UID: \"5651abf8-1969-4df5-a8bf-274fcc9edffe\") " Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.971686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf" (OuterVolumeSpecName: "kube-api-access-dbgnf") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "kube-api-access-dbgnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.979787 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph" (OuterVolumeSpecName: "ceph") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.980233 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.991121 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.993879 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.995070 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:19 crc kubenswrapper[4801]: I1206 04:03:19.999709 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.000686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.003729 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.004142 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.020514 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory" (OuterVolumeSpecName: "inventory") pod "5651abf8-1969-4df5-a8bf-274fcc9edffe" (UID: "5651abf8-1969-4df5-a8bf-274fcc9edffe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068684 4801 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068725 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068736 4801 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068745 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbgnf\" (UniqueName: \"kubernetes.io/projected/5651abf8-1969-4df5-a8bf-274fcc9edffe-kube-api-access-dbgnf\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068769 4801 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068779 4801 reconciler_common.go:293] "Volume detached for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068788 4801 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068797 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068805 4801 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068814 4801 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/5651abf8-1969-4df5-a8bf-274fcc9edffe-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.068822 4801 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/5651abf8-1969-4df5-a8bf-274fcc9edffe-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.431170 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" event={"ID":"5651abf8-1969-4df5-a8bf-274fcc9edffe","Type":"ContainerDied","Data":"13728a237fed5f93c04a8c043467e37d527995758cd80a5c302bf8d4ad67b24d"} Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.431224 4801 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="13728a237fed5f93c04a8c043467e37d527995758cd80a5c302bf8d4ad67b24d" Dec 06 04:03:20 crc kubenswrapper[4801]: I1206 04:03:20.431294 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.220384 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 06 04:03:34 crc kubenswrapper[4801]: E1206 04:03:34.221456 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="registry-server" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221476 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="registry-server" Dec 06 04:03:34 crc kubenswrapper[4801]: E1206 04:03:34.221493 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="extract-content" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221502 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="extract-content" Dec 06 04:03:34 crc kubenswrapper[4801]: E1206 04:03:34.221513 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5651abf8-1969-4df5-a8bf-274fcc9edffe" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221523 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5651abf8-1969-4df5-a8bf-274fcc9edffe" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 06 04:03:34 crc kubenswrapper[4801]: E1206 04:03:34.221558 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="extract-utilities" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221567 4801 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="extract-utilities" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221783 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5651abf8-1969-4df5-a8bf-274fcc9edffe" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.221802 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="bad5a894-a904-48f6-8de6-078d57693310" containerName="registry-server" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.223085 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.224901 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.226201 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.227157 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.228589 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.234868 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.272086 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.272423 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.272530 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.273501 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.287157 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375377 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: 
\"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375712 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49b9\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-kube-api-access-h49b9\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375733 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375751 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375844 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtvsr\" (UniqueName: 
\"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-kube-api-access-vtvsr\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375869 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375899 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-run\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375932 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-run\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375956 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.375990 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-scripts\") pod \"cinder-backup-0\" 
(UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376044 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376074 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-ceph\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376099 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376134 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 
crc kubenswrapper[4801]: I1206 04:03:34.376195 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376229 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376256 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376294 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376317 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376334 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376371 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376387 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376404 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-lib-modules\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376424 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376459 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376473 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-sys\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376496 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376524 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-dev\") pod 
\"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.376845 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.382440 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.383249 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478040 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-run\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478087 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-run\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478114 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478138 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-scripts\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478172 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478192 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-run\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478201 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-ceph\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " 
pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478275 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478298 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478172 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-run\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478318 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478360 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478384 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478427 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478201 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478442 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478430 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478541 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") 
" pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478580 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478616 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478611 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478639 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478681 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-lib-modules\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478673 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478669 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478672 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478764 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478788 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " 
pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478810 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-sys\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478815 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478725 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-lib-modules\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478673 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-sys\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478874 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-sys\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478935 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-dev\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478974 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.478997 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h49b9\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-kube-api-access-h49b9\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479021 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479048 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 
crc kubenswrapper[4801]: I1206 04:03:34.479073 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479084 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479089 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtvsr\" (UniqueName: \"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-kube-api-access-vtvsr\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479140 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479225 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-dev\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479072 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/d6100205-050d-4862-b25a-b4152511de4e-dev\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.479503 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5a1dce19-9384-4038-9e0a-4cfc3de377a6-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.482025 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.482091 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a1dce19-9384-4038-9e0a-4cfc3de377a6-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.482696 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.483501 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " 
pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.487236 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-ceph\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.488465 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.488685 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-scripts\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.495284 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6100205-050d-4862-b25a-b4152511de4e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.497787 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h49b9\" (UniqueName: \"kubernetes.io/projected/d6100205-050d-4862-b25a-b4152511de4e-kube-api-access-h49b9\") pod \"cinder-backup-0\" (UID: \"d6100205-050d-4862-b25a-b4152511de4e\") " pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.500431 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtvsr\" (UniqueName: 
\"kubernetes.io/projected/5a1dce19-9384-4038-9e0a-4cfc3de377a6-kube-api-access-vtvsr\") pod \"cinder-volume-volume1-0\" (UID: \"5a1dce19-9384-4038-9e0a-4cfc3de377a6\") " pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.573137 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.592185 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.867051 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-9f8gs"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.868585 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.887036 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.887134 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp7mn\" (UniqueName: \"kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.890627 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9f8gs"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.916709 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.918621 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.923490 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.923663 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.923793 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.923971 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-k97ks" Dec 06 04:03:34 crc kubenswrapper[4801]: I1206 04:03:34.928683 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.014733 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.015144 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp7mn\" (UniqueName: \"kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.025898 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.052319 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-0b98-account-create-update-qs4ph"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.053460 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp7mn\" (UniqueName: \"kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn\") pod \"manila-db-create-9f8gs\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.054114 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.057335 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.074726 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.077893 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.091287 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.091482 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.091527 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.091682 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wvkgz" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.104696 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-0b98-account-create-update-qs4ph"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.116351 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.116443 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvlk8\" (UniqueName: \"kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.116491 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.116531 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.116566 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.145224 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.146744 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.175009 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.186918 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.197838 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.202627 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.214819 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218450 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvlk8\" (UniqueName: \"kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218534 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218550 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc 
kubenswrapper[4801]: I1206 04:03:35.218567 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218617 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218639 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218654 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218676 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218696 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218718 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfd7s\" (UniqueName: \"kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc 
kubenswrapper[4801]: I1206 04:03:35.218805 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzqjz\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218843 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218472 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.218664 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.221709 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.222175 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.223057 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.234385 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.238007 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.244424 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvlk8\" (UniqueName: \"kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8\") pod \"horizon-86b4c777b9-r2w76\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.254719 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.275694 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.285697 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322175 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322225 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322257 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322278 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322300 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfd7s\" (UniqueName: 
\"kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322323 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322347 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322383 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322402 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322439 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzqjz\" (UniqueName: 
\"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322460 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322478 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322494 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322513 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b9pt\" (UniqueName: \"kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322541 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322559 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322584 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqfp4\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322605 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322621 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322647 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322672 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322707 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322723 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.322780 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.323246 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.324730 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.324962 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.325025 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.326053 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: 
\"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.327525 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.330796 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.334852 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.342678 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.346680 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzqjz\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc 
kubenswrapper[4801]: I1206 04:03:35.347302 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfd7s\" (UniqueName: \"kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s\") pod \"manila-0b98-account-create-update-qs4ph\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.360972 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.393287 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.416173 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424087 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424131 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424152 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424178 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b9pt\" (UniqueName: \"kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424206 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 
04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424253 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqfp4\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424277 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424299 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424327 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424362 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424420 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424450 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424472 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.424499 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.425269 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.426464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.427203 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.428696 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.429306 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.432430 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.432995 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " 
pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.433398 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.437718 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.438186 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.439128 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.447292 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.454376 
4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqfp4\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.467487 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b9pt\" (UniqueName: \"kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt\") pod \"horizon-786cb8dcb9-wz2q4\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.484693 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.491646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.552274 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.585621 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d6100205-050d-4862-b25a-b4152511de4e","Type":"ContainerStarted","Data":"8ad5dc97cba4bcb2381b319f502e533f9a91d7d6d5f48a8d9d9406089ee5794d"} Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.602699 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5a1dce19-9384-4038-9e0a-4cfc3de377a6","Type":"ContainerStarted","Data":"4b8151004d19c9e9e21aa5dae9a43427da7706cb64704500d6189a8e6e90c32d"} Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.720597 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-9f8gs"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.861220 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:03:35 crc kubenswrapper[4801]: I1206 04:03:35.962091 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-0b98-account-create-update-qs4ph"] Dec 06 04:03:35 crc kubenswrapper[4801]: W1206 04:03:35.977707 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0cc73ff2_805f_4c1d_8758_125c82a15fdc.slice/crio-6f274fddc5507077d17547c6f55362ac2127c711d9ac9e4b496958ede5b305fc WatchSource:0}: Error finding container 6f274fddc5507077d17547c6f55362ac2127c711d9ac9e4b496958ede5b305fc: Status 404 returned error can't find the container with id 6f274fddc5507077d17547c6f55362ac2127c711d9ac9e4b496958ede5b305fc Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.052891 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:03:36 crc kubenswrapper[4801]: W1206 04:03:36.083644 4801 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d6c2bd0_84b0_42b6_bb5a_2f568981b344.slice/crio-4292cbca9be316e519d2e43935af6f1938941a0d0bf61f5a98a94e4be445027f WatchSource:0}: Error finding container 4292cbca9be316e519d2e43935af6f1938941a0d0bf61f5a98a94e4be445027f: Status 404 returned error can't find the container with id 4292cbca9be316e519d2e43935af6f1938941a0d0bf61f5a98a94e4be445027f Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.170707 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:36 crc kubenswrapper[4801]: W1206 04:03:36.219355 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae264233_e409_43bb_ae50_1201f9472d17.slice/crio-ec2a88a703a6458da964a4697d3da1465e92af2c93d1bc2b753434d7717f9e68 WatchSource:0}: Error finding container ec2a88a703a6458da964a4697d3da1465e92af2c93d1bc2b753434d7717f9e68: Status 404 returned error can't find the container with id ec2a88a703a6458da964a4697d3da1465e92af2c93d1bc2b753434d7717f9e68 Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.248690 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.617695 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerStarted","Data":"6f274fddc5507077d17547c6f55362ac2127c711d9ac9e4b496958ede5b305fc"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.624005 4801 generic.go:334] "Generic (PLEG): container finished" podID="5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" containerID="d9d0dd3a375e338467248f6362e763fbb34b0adf484fcbfce079e72b4a53c3cb" exitCode=0 Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.624097 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-0b98-account-create-update-qs4ph" event={"ID":"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e","Type":"ContainerDied","Data":"d9d0dd3a375e338467248f6362e763fbb34b0adf484fcbfce079e72b4a53c3cb"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.624126 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-0b98-account-create-update-qs4ph" event={"ID":"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e","Type":"ContainerStarted","Data":"aab72ede3356581658efd8b8f17a97d50b0795da0e2c86de6a737049de47f287"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.627366 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d6100205-050d-4862-b25a-b4152511de4e","Type":"ContainerStarted","Data":"239ebb1ec12e1b32bec9c171a6f51721a95a375477afce0a200e921bb521bbb1"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.646221 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5a1dce19-9384-4038-9e0a-4cfc3de377a6","Type":"ContainerStarted","Data":"8aafdb4dfe2531471078dfb696b6ad97d80ef7d396c9532c0f42d4737c127188"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.649781 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerStarted","Data":"4292cbca9be316e519d2e43935af6f1938941a0d0bf61f5a98a94e4be445027f"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.656949 4801 generic.go:334] "Generic (PLEG): container finished" podID="48ba4993-d54d-4bc6-9250-b0a134e34d6d" containerID="c3e5140176a8bfed4bc86d7c2658aaae0323ef978c4f3fe2021b28dd8d61bd5c" exitCode=0 Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.657186 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9f8gs" 
event={"ID":"48ba4993-d54d-4bc6-9250-b0a134e34d6d","Type":"ContainerDied","Data":"c3e5140176a8bfed4bc86d7c2658aaae0323ef978c4f3fe2021b28dd8d61bd5c"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.657244 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9f8gs" event={"ID":"48ba4993-d54d-4bc6-9250-b0a134e34d6d","Type":"ContainerStarted","Data":"d93f6812efd519f8c1d0f6cf078e035913e69c320f201fb15c1ed979f5424dad"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.661287 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerStarted","Data":"ec2a88a703a6458da964a4697d3da1465e92af2c93d1bc2b753434d7717f9e68"} Dec 06 04:03:36 crc kubenswrapper[4801]: I1206 04:03:36.665153 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerStarted","Data":"52f46ba6a18c4b6220c4617b270dbf72157914668ac96531b67d4448cb5fa5c0"} Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.434817 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.559147 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.561069 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.577632 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.596969 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.616354 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.671717 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.672774 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.672813 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.684980 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.685135 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.685153 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5crp\" (UniqueName: \"kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.685190 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.685229 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.723923 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d85575696-vjhxr"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.729461 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.732727 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d85575696-vjhxr"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.786061 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"5a1dce19-9384-4038-9e0a-4cfc3de377a6","Type":"ContainerStarted","Data":"f4f3bf891085189484f2eebadebb93b398df95e825aecfbd44d1101a59aa3edb"} Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.787975 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788043 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788060 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5crp\" (UniqueName: \"kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788083 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts\") pod \"horizon-55868df668-jxh4g\" (UID: 
\"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788106 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788130 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788160 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.788775 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.789303 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 
04:03:37.789435 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.814624 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.815625 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerStarted","Data":"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48"} Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.816698 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.824578 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5crp\" (UniqueName: \"kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.832744 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.834522 
4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerStarted","Data":"870b050d3331e37e4b95bd3089cb8c1fa14f8619345a729bec88b1132a3f8a98"} Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.840216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs\") pod \"horizon-55868df668-jxh4g\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.840299 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"d6100205-050d-4862-b25a-b4152511de4e","Type":"ContainerStarted","Data":"aaa80352d54e810177d72311b0e39450095c64a3dccf38fa73a78a191fb2ec7d"} Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.860848 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.89623379 podStartE2EDuration="3.860829703s" podCreationTimestamp="2025-12-06 04:03:34 +0000 UTC" firstStartedPulling="2025-12-06 04:03:35.15541822 +0000 UTC m=+3468.278025792" lastFinishedPulling="2025-12-06 04:03:36.120014123 +0000 UTC m=+3469.242621705" observedRunningTime="2025-12-06 04:03:37.859158228 +0000 UTC m=+3470.981765800" watchObservedRunningTime="2025-12-06 04:03:37.860829703 +0000 UTC m=+3470.983437275" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.891832 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-config-data\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 
04:03:37.891881 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a358806-cf3d-4c1c-853a-ab310d0c7058-logs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.891936 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-combined-ca-bundle\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.892004 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-scripts\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.892032 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhdm4\" (UniqueName: \"kubernetes.io/projected/3a358806-cf3d-4c1c-853a-ab310d0c7058-kube-api-access-xhdm4\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.892082 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-tls-certs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 
04:03:37.892111 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-secret-key\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.968270 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993494 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-scripts\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993553 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhdm4\" (UniqueName: \"kubernetes.io/projected/3a358806-cf3d-4c1c-853a-ab310d0c7058-kube-api-access-xhdm4\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993589 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-tls-certs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993607 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-secret-key\") pod \"horizon-7d85575696-vjhxr\" (UID: 
\"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993670 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-config-data\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993691 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a358806-cf3d-4c1c-853a-ab310d0c7058-logs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.993826 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-combined-ca-bundle\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.998067 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-scripts\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.998318 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a358806-cf3d-4c1c-853a-ab310d0c7058-logs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:37 crc kubenswrapper[4801]: I1206 04:03:37.998999 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3a358806-cf3d-4c1c-853a-ab310d0c7058-config-data\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.001278 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-secret-key\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.002662 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-combined-ca-bundle\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.003080 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a358806-cf3d-4c1c-853a-ab310d0c7058-horizon-tls-certs\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.013872 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhdm4\" (UniqueName: \"kubernetes.io/projected/3a358806-cf3d-4c1c-853a-ab310d0c7058-kube-api-access-xhdm4\") pod \"horizon-7d85575696-vjhxr\" (UID: \"3a358806-cf3d-4c1c-853a-ab310d0c7058\") " pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.115287 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.351484 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.391842 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.521724206 podStartE2EDuration="4.391818072s" podCreationTimestamp="2025-12-06 04:03:34 +0000 UTC" firstStartedPulling="2025-12-06 04:03:35.249851886 +0000 UTC m=+3468.372459458" lastFinishedPulling="2025-12-06 04:03:36.119945752 +0000 UTC m=+3469.242553324" observedRunningTime="2025-12-06 04:03:37.915192718 +0000 UTC m=+3471.037800290" watchObservedRunningTime="2025-12-06 04:03:38.391818072 +0000 UTC m=+3471.514425644" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.507470 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts\") pod \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.507714 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfd7s\" (UniqueName: \"kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s\") pod \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\" (UID: \"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e\") " Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.510832 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" (UID: "5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.518066 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s" (OuterVolumeSpecName: "kube-api-access-tfd7s") pod "5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" (UID: "5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e"). InnerVolumeSpecName "kube-api-access-tfd7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.607831 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.609812 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfd7s\" (UniqueName: \"kubernetes.io/projected/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-kube-api-access-tfd7s\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.609836 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:38 crc kubenswrapper[4801]: W1206 04:03:38.699985 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf19d88d7_ec86_4b5f_8c22_b19e3750a4b1.slice/crio-cbc800a393a68271e862845105bcfc33daa90e44ac31100922e99592db905de8 WatchSource:0}: Error finding container cbc800a393a68271e862845105bcfc33daa90e44ac31100922e99592db905de8: Status 404 returned error can't find the container with id cbc800a393a68271e862845105bcfc33daa90e44ac31100922e99592db905de8 Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.713555 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts\") pod \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.727237 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp7mn\" (UniqueName: \"kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn\") pod \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\" (UID: \"48ba4993-d54d-4bc6-9250-b0a134e34d6d\") " Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.714345 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ba4993-d54d-4bc6-9250-b0a134e34d6d" (UID: "48ba4993-d54d-4bc6-9250-b0a134e34d6d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.729372 4801 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba4993-d54d-4bc6-9250-b0a134e34d6d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.734389 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.741651 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn" (OuterVolumeSpecName: "kube-api-access-gp7mn") pod "48ba4993-d54d-4bc6-9250-b0a134e34d6d" (UID: "48ba4993-d54d-4bc6-9250-b0a134e34d6d"). InnerVolumeSpecName "kube-api-access-gp7mn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.833412 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp7mn\" (UniqueName: \"kubernetes.io/projected/48ba4993-d54d-4bc6-9250-b0a134e34d6d-kube-api-access-gp7mn\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.865701 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerStarted","Data":"24dc5d1787d76e2d3a2cd0e3ed6321025a1616b1844b65bf99f81b5a4d4ce27d"} Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.865902 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-log" containerID="cri-o://870b050d3331e37e4b95bd3089cb8c1fa14f8619345a729bec88b1132a3f8a98" gracePeriod=30 Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.866318 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-httpd" containerID="cri-o://24dc5d1787d76e2d3a2cd0e3ed6321025a1616b1844b65bf99f81b5a4d4ce27d" gracePeriod=30 Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.879437 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-0b98-account-create-update-qs4ph" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.879563 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-0b98-account-create-update-qs4ph" event={"ID":"5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e","Type":"ContainerDied","Data":"aab72ede3356581658efd8b8f17a97d50b0795da0e2c86de6a737049de47f287"} Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.879632 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aab72ede3356581658efd8b8f17a97d50b0795da0e2c86de6a737049de47f287" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.909934 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-9f8gs" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.911652 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-9f8gs" event={"ID":"48ba4993-d54d-4bc6-9250-b0a134e34d6d","Type":"ContainerDied","Data":"d93f6812efd519f8c1d0f6cf078e035913e69c320f201fb15c1ed979f5424dad"} Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.911805 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d93f6812efd519f8c1d0f6cf078e035913e69c320f201fb15c1ed979f5424dad" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.918402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerStarted","Data":"cbc800a393a68271e862845105bcfc33daa90e44ac31100922e99592db905de8"} Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.922812 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d85575696-vjhxr"] Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.926708 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=3.926694266 podStartE2EDuration="3.926694266s" podCreationTimestamp="2025-12-06 04:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:03:38.903569432 +0000 UTC m=+3472.026177004" watchObservedRunningTime="2025-12-06 04:03:38.926694266 +0000 UTC m=+3472.049301838" Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.942539 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-log" containerID="cri-o://ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" gracePeriod=30 Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.942565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerStarted","Data":"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f"} Dec 06 04:03:38 crc kubenswrapper[4801]: I1206 04:03:38.943299 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-httpd" containerID="cri-o://7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" gracePeriod=30 Dec 06 04:03:38 crc kubenswrapper[4801]: W1206 04:03:38.953783 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a358806_cf3d_4c1c_853a_ab310d0c7058.slice/crio-41e19b2f5669e8c0efdeea06e1f569b3452fecfa7214f6390f862d0da6e97850 WatchSource:0}: Error finding container 41e19b2f5669e8c0efdeea06e1f569b3452fecfa7214f6390f862d0da6e97850: Status 404 returned error can't find the container with id 41e19b2f5669e8c0efdeea06e1f569b3452fecfa7214f6390f862d0da6e97850 Dec 06 04:03:38 crc 
kubenswrapper[4801]: I1206 04:03:38.998331 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.998298275 podStartE2EDuration="4.998298275s" podCreationTimestamp="2025-12-06 04:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:03:38.980822483 +0000 UTC m=+3472.103430065" watchObservedRunningTime="2025-12-06 04:03:38.998298275 +0000 UTC m=+3472.120905847" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.573724 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.592774 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.666899 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.758739 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.758860 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759050 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759103 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759285 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzqjz\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759407 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759465 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759509 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.759605 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run\") pod \"ae264233-e409-43bb-ae50-1201f9472d17\" (UID: \"ae264233-e409-43bb-ae50-1201f9472d17\") " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.760342 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs" (OuterVolumeSpecName: "logs") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.760520 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.767428 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz" (OuterVolumeSpecName: "kube-api-access-pzqjz") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "kube-api-access-pzqjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.767958 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.773430 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph" (OuterVolumeSpecName: "ceph") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.781082 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts" (OuterVolumeSpecName: "scripts") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.816109 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.862901 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.862967 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.863010 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.863031 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzqjz\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-kube-api-access-pzqjz\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.863045 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/ae264233-e409-43bb-ae50-1201f9472d17-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.863056 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.863068 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae264233-e409-43bb-ae50-1201f9472d17-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.877384 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data" (OuterVolumeSpecName: "config-data") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.886984 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae264233-e409-43bb-ae50-1201f9472d17" (UID: "ae264233-e409-43bb-ae50-1201f9472d17"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.899466 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960539 4801 generic.go:334] "Generic (PLEG): container finished" podID="ae264233-e409-43bb-ae50-1201f9472d17" containerID="7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" exitCode=0 Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960602 4801 generic.go:334] "Generic (PLEG): container finished" podID="ae264233-e409-43bb-ae50-1201f9472d17" containerID="ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" exitCode=143 Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960664 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerDied","Data":"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f"} Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960711 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerDied","Data":"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48"} Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960726 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ae264233-e409-43bb-ae50-1201f9472d17","Type":"ContainerDied","Data":"ec2a88a703a6458da964a4697d3da1465e92af2c93d1bc2b753434d7717f9e68"} Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960753 4801 scope.go:117] "RemoveContainer" containerID="7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.960981 4801 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.965879 4801 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.965931 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.965946 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae264233-e409-43bb-ae50-1201f9472d17-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.982306 4801 generic.go:334] "Generic (PLEG): container finished" podID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerID="24dc5d1787d76e2d3a2cd0e3ed6321025a1616b1844b65bf99f81b5a4d4ce27d" exitCode=0 Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.982340 4801 generic.go:334] "Generic (PLEG): container finished" podID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerID="870b050d3331e37e4b95bd3089cb8c1fa14f8619345a729bec88b1132a3f8a98" exitCode=143 Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.982383 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerDied","Data":"24dc5d1787d76e2d3a2cd0e3ed6321025a1616b1844b65bf99f81b5a4d4ce27d"} Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.982420 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerDied","Data":"870b050d3331e37e4b95bd3089cb8c1fa14f8619345a729bec88b1132a3f8a98"} Dec 06 04:03:39 crc kubenswrapper[4801]: I1206 04:03:39.984890 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d85575696-vjhxr" event={"ID":"3a358806-cf3d-4c1c-853a-ab310d0c7058","Type":"ContainerStarted","Data":"41e19b2f5669e8c0efdeea06e1f569b3452fecfa7214f6390f862d0da6e97850"} Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.013041 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.043039 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.061767 4801 scope.go:117] "RemoveContainer" containerID="ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.074249 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.087994 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088042 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.088102 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ba4993-d54d-4bc6-9250-b0a134e34d6d" containerName="mariadb-database-create" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088109 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ba4993-d54d-4bc6-9250-b0a134e34d6d" containerName="mariadb-database-create" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.088121 
4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088127 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.088139 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" containerName="mariadb-account-create-update" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088146 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" containerName="mariadb-account-create-update" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088450 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088468 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae264233-e409-43bb-ae50-1201f9472d17" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088481 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ba4993-d54d-4bc6-9250-b0a134e34d6d" containerName="mariadb-database-create" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.088500 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" containerName="mariadb-account-create-update" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.089712 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.089829 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.096654 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.096897 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.098151 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.136375 4801 scope.go:117] "RemoveContainer" containerID="7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.137278 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f\": container with ID starting with 7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f not found: ID does not exist" containerID="7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.137348 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f"} err="failed to get container status \"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f\": rpc error: code = NotFound desc = could not find container \"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f\": container with ID starting with 7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f not found: ID does not exist" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.137388 4801 scope.go:117] "RemoveContainer" 
containerID="ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.140493 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48\": container with ID starting with ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48 not found: ID does not exist" containerID="ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.140548 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48"} err="failed to get container status \"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48\": rpc error: code = NotFound desc = could not find container \"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48\": container with ID starting with ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48 not found: ID does not exist" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.140584 4801 scope.go:117] "RemoveContainer" containerID="7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.140970 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f"} err="failed to get container status \"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f\": rpc error: code = NotFound desc = could not find container \"7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f\": container with ID starting with 7bbe64bad45e4ad5270d8396f784db8fc1b25673e9175391e6ec0a815d050b4f not found: ID does not exist" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.141010 4801 scope.go:117] 
"RemoveContainer" containerID="ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.141342 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48"} err="failed to get container status \"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48\": rpc error: code = NotFound desc = could not find container \"ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48\": container with ID starting with ebb9680d3471ed67f1c5eaa8b468fd16405e1fb67f28409f0ce6ce43a21d7a48 not found: ID does not exist" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.170667 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.170811 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.170911 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.170968 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs\") pod 
\"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171020 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171069 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171112 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqfp4\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171176 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171219 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts\") pod \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\" (UID: \"2332dc48-2d76-49f0-b0da-a9bd5af0b263\") " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171472 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171508 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171558 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-logs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171596 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m7cd\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-kube-api-access-8m7cd\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171645 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171702 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171723 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171766 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.171789 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.172354 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.175040 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs" (OuterVolumeSpecName: "logs") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.210009 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph" (OuterVolumeSpecName: "ceph") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.210885 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.210986 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts" (OuterVolumeSpecName: "scripts") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.212272 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4" (OuterVolumeSpecName: "kube-api-access-kqfp4") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "kube-api-access-kqfp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.222378 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.271134 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-mk49l"] Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.271544 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.271557 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: E1206 04:03:40.271574 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.271580 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.271788 4801 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-log" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.271805 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" containerName="glance-httpd" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.272162 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.272502 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278149 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278380 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278555 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278659 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278805 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.278914 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.279054 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-logs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.279433 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m7cd\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-kube-api-access-8m7cd\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" 
Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.279609 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.281514 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-logs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.282005 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9763c19c-a748-434f-a868-af381202b97e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.283878 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284152 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-t728x" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284782 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc 
kubenswrapper[4801]: I1206 04:03:40.284834 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284852 4801 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284893 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284911 4801 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284927 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284940 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2332dc48-2d76-49f0-b0da-a9bd5af0b263-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.284953 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqfp4\" (UniqueName: \"kubernetes.io/projected/2332dc48-2d76-49f0-b0da-a9bd5af0b263-kube-api-access-kqfp4\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.285406 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 06 04:03:40 crc 
kubenswrapper[4801]: I1206 04:03:40.289806 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.291453 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-ceph\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.291959 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.299911 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.301512 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9763c19c-a748-434f-a868-af381202b97e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.303393 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8m7cd\" (UniqueName: \"kubernetes.io/projected/9763c19c-a748-434f-a868-af381202b97e-kube-api-access-8m7cd\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.308428 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mk49l"] Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.344238 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data" (OuterVolumeSpecName: "config-data") pod "2332dc48-2d76-49f0-b0da-a9bd5af0b263" (UID: "2332dc48-2d76-49f0-b0da-a9bd5af0b263"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.344632 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"9763c19c-a748-434f-a868-af381202b97e\") " pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.385309 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.389344 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.389551 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.389823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.390075 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjtsf\" (UniqueName: \"kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.390231 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2332dc48-2d76-49f0-b0da-a9bd5af0b263-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.390269 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.412376 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.491354 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.491838 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjtsf\" (UniqueName: \"kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.491873 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.491920 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.497871 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.499800 4801 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.503216 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.524634 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjtsf\" (UniqueName: \"kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf\") pod \"manila-db-sync-mk49l\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.610472 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mk49l" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.998422 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2332dc48-2d76-49f0-b0da-a9bd5af0b263","Type":"ContainerDied","Data":"52f46ba6a18c4b6220c4617b270dbf72157914668ac96531b67d4448cb5fa5c0"} Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.998524 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:40 crc kubenswrapper[4801]: I1206 04:03:40.998677 4801 scope.go:117] "RemoveContainer" containerID="24dc5d1787d76e2d3a2cd0e3ed6321025a1616b1844b65bf99f81b5a4d4ce27d" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.016026 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.053248 4801 scope.go:117] "RemoveContainer" containerID="870b050d3331e37e4b95bd3089cb8c1fa14f8619345a729bec88b1132a3f8a98" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.060830 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.080119 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.106080 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.118383 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.118531 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.132294 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.132389 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.171930 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.171985 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.192866 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-mk49l"] Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208321 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208393 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod 
\"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208416 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208440 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208478 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208510 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c57z\" (UniqueName: \"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-kube-api-access-9c57z\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208550 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208603 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.208643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-logs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: W1206 04:03:41.236191 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34ed2fd1_0b46_478f_b8f6_013c6744778d.slice/crio-e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb WatchSource:0}: Error finding container e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb: Status 404 returned error can't find the container with id e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.238459 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2332dc48-2d76-49f0-b0da-a9bd5af0b263" path="/var/lib/kubelet/pods/2332dc48-2d76-49f0-b0da-a9bd5af0b263/volumes" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.239167 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae264233-e409-43bb-ae50-1201f9472d17" 
path="/var/lib/kubelet/pods/ae264233-e409-43bb-ae50-1201f9472d17/volumes" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310065 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-logs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310130 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310169 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310187 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310207 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 
04:03:41.310250 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310278 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c57z\" (UniqueName: \"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-kube-api-access-9c57z\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310321 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.310378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.312107 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-logs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.313073 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.313392 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.323903 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.325537 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-scripts\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.325537 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.327917 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-ceph\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.333783 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-config-data\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.339534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c57z\" (UniqueName: \"kubernetes.io/projected/2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54-kube-api-access-9c57z\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.380657 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54\") " pod="openstack/glance-default-external-api-0" Dec 06 04:03:41 crc kubenswrapper[4801]: I1206 04:03:41.462428 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 04:03:42 crc kubenswrapper[4801]: I1206 04:03:42.018163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mk49l" event={"ID":"34ed2fd1-0b46-478f-b8f6-013c6744778d","Type":"ContainerStarted","Data":"e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb"} Dec 06 04:03:42 crc kubenswrapper[4801]: I1206 04:03:42.029308 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9763c19c-a748-434f-a868-af381202b97e","Type":"ContainerStarted","Data":"78c660d23b3c5aa2bfcd1a5283b14408b05d5d5f132f92fad5bc42d9857aec55"} Dec 06 04:03:42 crc kubenswrapper[4801]: I1206 04:03:42.152309 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 04:03:43 crc kubenswrapper[4801]: I1206 04:03:43.050795 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54","Type":"ContainerStarted","Data":"7a9d716957b8f2a259cf0c17acefa2d7222dac27962d7c7a3b89cfb37e8c3273"} Dec 06 04:03:43 crc kubenswrapper[4801]: I1206 04:03:43.055401 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9763c19c-a748-434f-a868-af381202b97e","Type":"ContainerStarted","Data":"2fd9c1b26b92411b958110fb0d365f1f6d6526c09858f3f4d8a27979e1825080"} Dec 06 04:03:44 crc kubenswrapper[4801]: I1206 04:03:44.075474 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"9763c19c-a748-434f-a868-af381202b97e","Type":"ContainerStarted","Data":"2d817264124e6664422b66d7348f6880e6bd79a03331236b4805a216a2ec835c"} Dec 06 04:03:44 crc kubenswrapper[4801]: I1206 04:03:44.078801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54","Type":"ContainerStarted","Data":"79f3dbacc583e81e544dd3d26a175987ab7e6a4dab6023a4b05c6c372a976d29"} Dec 06 04:03:44 crc kubenswrapper[4801]: I1206 04:03:44.109375 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.109354662 podStartE2EDuration="4.109354662s" podCreationTimestamp="2025-12-06 04:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:03:44.106109554 +0000 UTC m=+3477.228717146" watchObservedRunningTime="2025-12-06 04:03:44.109354662 +0000 UTC m=+3477.231962234" Dec 06 04:03:44 crc kubenswrapper[4801]: I1206 04:03:44.824783 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Dec 06 04:03:44 crc kubenswrapper[4801]: I1206 04:03:44.866559 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 06 04:03:50 crc kubenswrapper[4801]: I1206 04:03:50.413634 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:50 crc kubenswrapper[4801]: I1206 04:03:50.414258 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:50 crc kubenswrapper[4801]: I1206 04:03:50.446827 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:50 crc kubenswrapper[4801]: I1206 04:03:50.460638 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:51 crc kubenswrapper[4801]: I1206 04:03:51.176135 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:51 
crc kubenswrapper[4801]: I1206 04:03:51.176205 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:53 crc kubenswrapper[4801]: I1206 04:03:53.186216 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:53 crc kubenswrapper[4801]: I1206 04:03:53.195394 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 04:03:53 crc kubenswrapper[4801]: I1206 04:03:53.206915 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.208413 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerStarted","Data":"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.208928 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerStarted","Data":"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.217375 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mk49l" event={"ID":"34ed2fd1-0b46-478f-b8f6-013c6744778d","Type":"ContainerStarted","Data":"6f8ebcfddcc43acd0ec256d80d7a437b11b70f76a371d9624ee8fb83eeefc6ca"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.222894 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerStarted","Data":"c95111234899a63892976c545a97542d960aeb3dbd614c1ff1e9ce8ac12285f3"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.222943 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerStarted","Data":"8e7c42c883f27aa4940a83a9f23c010d5b2f9fb4bb016d84d9637a11b2a3d309"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.223076 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786cb8dcb9-wz2q4" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon-log" containerID="cri-o://8e7c42c883f27aa4940a83a9f23c010d5b2f9fb4bb016d84d9637a11b2a3d309" gracePeriod=30 Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.223211 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-786cb8dcb9-wz2q4" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon" containerID="cri-o://c95111234899a63892976c545a97542d960aeb3dbd614c1ff1e9ce8ac12285f3" gracePeriod=30 Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.230330 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerStarted","Data":"c5b1d76e4db28094ba85b1a137d25016c64cf6fb09cfd586759bf152daa8335a"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.230380 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerStarted","Data":"96c6148d942ded3d99feb1ccef7632ba67942118f83d6ea56823e6b3eb842a38"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.230838 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86b4c777b9-r2w76" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon" containerID="cri-o://c5b1d76e4db28094ba85b1a137d25016c64cf6fb09cfd586759bf152daa8335a" gracePeriod=30 Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.230834 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-86b4c777b9-r2w76" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon-log" containerID="cri-o://96c6148d942ded3d99feb1ccef7632ba67942118f83d6ea56823e6b3eb842a38" gracePeriod=30 Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.237525 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d85575696-vjhxr" event={"ID":"3a358806-cf3d-4c1c-853a-ab310d0c7058","Type":"ContainerStarted","Data":"e6c77fb975fa2f0c004447ebc85e7c7a00ac77ae676c9f775882b11887494296"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.237571 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d85575696-vjhxr" event={"ID":"3a358806-cf3d-4c1c-853a-ab310d0c7058","Type":"ContainerStarted","Data":"6280325d40a4ebc56d29f63ad1abcf9321d8a3eecc31608bd5689112c8767d74"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.246458 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54","Type":"ContainerStarted","Data":"7aa7cae000aa850d4e80e4605cd1205bb98a118cd4df016b7fad527c7247df83"} Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.249100 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55868df668-jxh4g" podStartSLOduration=2.510488157 podStartE2EDuration="17.249078986s" podCreationTimestamp="2025-12-06 04:03:37 +0000 UTC" firstStartedPulling="2025-12-06 04:03:38.71795092 +0000 UTC m=+3471.840558492" lastFinishedPulling="2025-12-06 04:03:53.456541759 +0000 UTC m=+3486.579149321" observedRunningTime="2025-12-06 04:03:54.242065677 +0000 UTC m=+3487.364673259" watchObservedRunningTime="2025-12-06 04:03:54.249078986 +0000 UTC m=+3487.371686558" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.267716 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/horizon-786cb8dcb9-wz2q4" podStartSLOduration=2.924498152 podStartE2EDuration="20.267694107s" podCreationTimestamp="2025-12-06 04:03:34 +0000 UTC" firstStartedPulling="2025-12-06 04:03:36.109586293 +0000 UTC m=+3469.232193865" lastFinishedPulling="2025-12-06 04:03:53.452782248 +0000 UTC m=+3486.575389820" observedRunningTime="2025-12-06 04:03:54.264998864 +0000 UTC m=+3487.387606436" watchObservedRunningTime="2025-12-06 04:03:54.267694107 +0000 UTC m=+3487.390301689" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.318498 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-mk49l" podStartSLOduration=2.100088168 podStartE2EDuration="14.318472616s" podCreationTimestamp="2025-12-06 04:03:40 +0000 UTC" firstStartedPulling="2025-12-06 04:03:41.239048726 +0000 UTC m=+3474.361656298" lastFinishedPulling="2025-12-06 04:03:53.457433134 +0000 UTC m=+3486.580040746" observedRunningTime="2025-12-06 04:03:54.31008075 +0000 UTC m=+3487.432688322" watchObservedRunningTime="2025-12-06 04:03:54.318472616 +0000 UTC m=+3487.441080198" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.319882 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-86b4c777b9-r2w76" podStartSLOduration=2.84538232 podStartE2EDuration="20.319871034s" podCreationTimestamp="2025-12-06 04:03:34 +0000 UTC" firstStartedPulling="2025-12-06 04:03:35.97997542 +0000 UTC m=+3469.102582992" lastFinishedPulling="2025-12-06 04:03:53.454464134 +0000 UTC m=+3486.577071706" observedRunningTime="2025-12-06 04:03:54.289634959 +0000 UTC m=+3487.412242551" watchObservedRunningTime="2025-12-06 04:03:54.319871034 +0000 UTC m=+3487.442478616" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.375640 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d85575696-vjhxr" podStartSLOduration=2.8873196 podStartE2EDuration="17.375622095s" 
podCreationTimestamp="2025-12-06 04:03:37 +0000 UTC" firstStartedPulling="2025-12-06 04:03:38.961935745 +0000 UTC m=+3472.084543317" lastFinishedPulling="2025-12-06 04:03:53.45023824 +0000 UTC m=+3486.572845812" observedRunningTime="2025-12-06 04:03:54.331202089 +0000 UTC m=+3487.453809681" watchObservedRunningTime="2025-12-06 04:03:54.375622095 +0000 UTC m=+3487.498229667" Dec 06 04:03:54 crc kubenswrapper[4801]: I1206 04:03:54.387932 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.387913317 podStartE2EDuration="13.387913317s" podCreationTimestamp="2025-12-06 04:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:03:54.37428567 +0000 UTC m=+3487.496893242" watchObservedRunningTime="2025-12-06 04:03:54.387913317 +0000 UTC m=+3487.510520889" Dec 06 04:03:55 crc kubenswrapper[4801]: I1206 04:03:55.279353 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:03:55 crc kubenswrapper[4801]: I1206 04:03:55.485444 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:03:57 crc kubenswrapper[4801]: I1206 04:03:57.969342 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:57 crc kubenswrapper[4801]: I1206 04:03:57.969939 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:03:58 crc kubenswrapper[4801]: I1206 04:03:58.115896 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:03:58 crc kubenswrapper[4801]: I1206 04:03:58.116180 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:04:01 crc kubenswrapper[4801]: I1206 04:04:01.462873 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 04:04:01 crc kubenswrapper[4801]: I1206 04:04:01.463188 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 04:04:01 crc kubenswrapper[4801]: I1206 04:04:01.513412 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 04:04:01 crc kubenswrapper[4801]: I1206 04:04:01.522431 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 04:04:02 crc kubenswrapper[4801]: I1206 04:04:02.325299 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 04:04:02 crc kubenswrapper[4801]: I1206 04:04:02.325875 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 04:04:04 crc kubenswrapper[4801]: I1206 04:04:04.353912 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 04:04:04 crc kubenswrapper[4801]: I1206 04:04:04.354329 4801 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 04:04:04 crc kubenswrapper[4801]: I1206 04:04:04.365466 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 04:04:04 crc kubenswrapper[4801]: I1206 04:04:04.374294 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 04:04:07 crc kubenswrapper[4801]: I1206 04:04:07.971102 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Dec 06 04:04:08 crc kubenswrapper[4801]: I1206 04:04:08.118135 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d85575696-vjhxr" podUID="3a358806-cf3d-4c1c-853a-ab310d0c7058" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.241:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.241:8443: connect: connection refused" Dec 06 04:04:11 crc kubenswrapper[4801]: I1206 04:04:11.170533 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:04:11 crc kubenswrapper[4801]: I1206 04:04:11.170923 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:04:11 crc kubenswrapper[4801]: I1206 04:04:11.170982 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:04:11 crc kubenswrapper[4801]: I1206 04:04:11.172109 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:04:11 crc 
kubenswrapper[4801]: I1206 04:04:11.172217 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe" gracePeriod=600 Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.583076 4801 generic.go:334] "Generic (PLEG): container finished" podID="34ed2fd1-0b46-478f-b8f6-013c6744778d" containerID="6f8ebcfddcc43acd0ec256d80d7a437b11b70f76a371d9624ee8fb83eeefc6ca" exitCode=0 Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.583141 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mk49l" event={"ID":"34ed2fd1-0b46-478f-b8f6-013c6744778d","Type":"ContainerDied","Data":"6f8ebcfddcc43acd0ec256d80d7a437b11b70f76a371d9624ee8fb83eeefc6ca"} Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.587508 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe" exitCode=0 Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.587563 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe"} Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.587603 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"} Dec 06 04:04:12 crc kubenswrapper[4801]: I1206 04:04:12.587632 4801 scope.go:117] "RemoveContainer" 
containerID="209605d865f1d030fafcddda6e451c7de8fa8696998ed81552541a0f6bf4fc17" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.042468 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-mk49l" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.175057 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data\") pod \"34ed2fd1-0b46-478f-b8f6-013c6744778d\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.175262 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjtsf\" (UniqueName: \"kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf\") pod \"34ed2fd1-0b46-478f-b8f6-013c6744778d\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.175340 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data\") pod \"34ed2fd1-0b46-478f-b8f6-013c6744778d\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.175359 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle\") pod \"34ed2fd1-0b46-478f-b8f6-013c6744778d\" (UID: \"34ed2fd1-0b46-478f-b8f6-013c6744778d\") " Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.182883 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf" (OuterVolumeSpecName: "kube-api-access-fjtsf") pod 
"34ed2fd1-0b46-478f-b8f6-013c6744778d" (UID: "34ed2fd1-0b46-478f-b8f6-013c6744778d"). InnerVolumeSpecName "kube-api-access-fjtsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.183553 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "34ed2fd1-0b46-478f-b8f6-013c6744778d" (UID: "34ed2fd1-0b46-478f-b8f6-013c6744778d"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.186372 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data" (OuterVolumeSpecName: "config-data") pod "34ed2fd1-0b46-478f-b8f6-013c6744778d" (UID: "34ed2fd1-0b46-478f-b8f6-013c6744778d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.216870 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34ed2fd1-0b46-478f-b8f6-013c6744778d" (UID: "34ed2fd1-0b46-478f-b8f6-013c6744778d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.277618 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjtsf\" (UniqueName: \"kubernetes.io/projected/34ed2fd1-0b46-478f-b8f6-013c6744778d-kube-api-access-fjtsf\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.277650 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.277660 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.277668 4801 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/34ed2fd1-0b46-478f-b8f6-013c6744778d-job-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.607434 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-mk49l" event={"ID":"34ed2fd1-0b46-478f-b8f6-013c6744778d","Type":"ContainerDied","Data":"e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb"} Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.607483 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6af239b4cb020515b3e597e068688b6ceef86077ab54cff58528ac4cab82ffb" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.607483 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-mk49l" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.913728 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:14 crc kubenswrapper[4801]: E1206 04:04:14.914733 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34ed2fd1-0b46-478f-b8f6-013c6744778d" containerName="manila-db-sync" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.914779 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="34ed2fd1-0b46-478f-b8f6-013c6744778d" containerName="manila-db-sync" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.915114 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="34ed2fd1-0b46-478f-b8f6-013c6744778d" containerName="manila-db-sync" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.916816 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.918747 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.919186 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.919457 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.920421 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-t728x" Dec 06 04:04:14 crc kubenswrapper[4801]: I1206 04:04:14.930485 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.002563 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch8ng\" 
(UniqueName: \"kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.002974 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.003153 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.003279 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.003444 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.003590 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.043131 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.044828 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.048997 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.082743 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106338 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106395 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106416 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 
04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106430 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106462 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106495 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106517 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106542 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106559 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106577 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106612 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch8ng\" (UniqueName: \"kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106630 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhj9\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106675 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.106711 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.108626 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.113882 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.114418 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.132801 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.152016 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " 
pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.152901 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch8ng\" (UniqueName: \"kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng\") pod \"manila-scheduler-0\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.195890 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-rxnbr"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.197817 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.209535 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.209792 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.209892 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210012 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210091 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210172 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210247 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210346 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-config\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210432 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210508 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210584 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210660 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn2c\" (UniqueName: \"kubernetes.io/projected/8653b927-16f7-4400-8965-2ebdd408c0ca-kube-api-access-mjn2c\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210745 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210943 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.211456 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.210867 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhj9\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.223024 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.223783 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.224116 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " 
pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.225041 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.225853 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.240174 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-rxnbr"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.258108 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhj9\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9\") pod \"manila-share-share1-0\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.307999 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.323578 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.323737 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-config\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.323840 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjn2c\" (UniqueName: \"kubernetes.io/projected/8653b927-16f7-4400-8965-2ebdd408c0ca-kube-api-access-mjn2c\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.323913 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.324004 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" 
Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.324061 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.330749 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-config\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.332930 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.343378 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.344430 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.348253 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " 
pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.349648 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.350343 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8653b927-16f7-4400-8965-2ebdd408c0ca-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.355545 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.356514 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjn2c\" (UniqueName: \"kubernetes.io/projected/8653b927-16f7-4400-8965-2ebdd408c0ca-kube-api-access-mjn2c\") pod \"dnsmasq-dns-76b5fdb995-rxnbr\" (UID: \"8653b927-16f7-4400-8965-2ebdd408c0ca\") " pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.394943 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.410237 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.424977 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425146 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425182 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzk7\" (UniqueName: \"kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425212 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425241 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom\") pod \"manila-api-0\" 
(UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425264 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.425305 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533411 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533454 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzzk7\" (UniqueName: \"kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533491 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533514 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533531 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533575 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.533625 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.535438 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.535887 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 
crc kubenswrapper[4801]: I1206 04:04:15.547675 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.553708 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.555935 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.556028 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.556411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzzk7\" (UniqueName: \"kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7\") pod \"manila-api-0\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.629431 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.714803 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:15 crc kubenswrapper[4801]: I1206 04:04:15.911740 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.237927 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.271487 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-rxnbr"] Dec 06 04:04:16 crc kubenswrapper[4801]: W1206 04:04:16.296103 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5a70a17_2398_41d3_adee_33271686d5ac.slice/crio-fffc47e28c353495863765ed95eb87282b8bae956d5715cd28dfd3fd418a70e2 WatchSource:0}: Error finding container fffc47e28c353495863765ed95eb87282b8bae956d5715cd28dfd3fd418a70e2: Status 404 returned error can't find the container with id fffc47e28c353495863765ed95eb87282b8bae956d5715cd28dfd3fd418a70e2 Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.512591 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:16 crc kubenswrapper[4801]: W1206 04:04:16.581259 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4385ec90_302f_4bdf_aa67_67aa0a2f115f.slice/crio-0f5255d48ff1e363ae90aff1a83ec788e02e0f7c183b61945a85b1dd9d2660b4 WatchSource:0}: Error finding container 0f5255d48ff1e363ae90aff1a83ec788e02e0f7c183b61945a85b1dd9d2660b4: Status 404 returned error can't find the container with id 0f5255d48ff1e363ae90aff1a83ec788e02e0f7c183b61945a85b1dd9d2660b4 Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 
04:04:16.635348 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerStarted","Data":"86d6b551777fb8c96004851901dfd6211695815d0a42d334faeb5318cce6d213"} Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.638374 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" event={"ID":"8653b927-16f7-4400-8965-2ebdd408c0ca","Type":"ContainerStarted","Data":"6526df71f45af422030e86a738278d85bcca1132c032fdeee52daeaaf13bf5ed"} Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.640853 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerStarted","Data":"0f5255d48ff1e363ae90aff1a83ec788e02e0f7c183b61945a85b1dd9d2660b4"} Dec 06 04:04:16 crc kubenswrapper[4801]: I1206 04:04:16.643045 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerStarted","Data":"fffc47e28c353495863765ed95eb87282b8bae956d5715cd28dfd3fd418a70e2"} Dec 06 04:04:17 crc kubenswrapper[4801]: I1206 04:04:17.664581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerStarted","Data":"facb31c8cf3867da3d9e3d326ccca761ad21e9368839c0f522e21d870bac47e0"} Dec 06 04:04:17 crc kubenswrapper[4801]: I1206 04:04:17.667948 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerStarted","Data":"e8d9864afae8eb6a99aca81de23867e22e2e4cd2d3b97b814e39c193575e8f36"} Dec 06 04:04:17 crc kubenswrapper[4801]: I1206 04:04:17.671024 4801 generic.go:334] "Generic (PLEG): container finished" podID="8653b927-16f7-4400-8965-2ebdd408c0ca" 
containerID="ab47137f40c120a257ece790731a7c701b67568d2eaefce48bc5cf8cc60e363a" exitCode=0 Dec 06 04:04:17 crc kubenswrapper[4801]: I1206 04:04:17.671059 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" event={"ID":"8653b927-16f7-4400-8965-2ebdd408c0ca","Type":"ContainerDied","Data":"ab47137f40c120a257ece790731a7c701b67568d2eaefce48bc5cf8cc60e363a"} Dec 06 04:04:17 crc kubenswrapper[4801]: I1206 04:04:17.822624 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.683601 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerStarted","Data":"de39b1860d0120716815f05e8047c80cfe86bcfe8f6b818967afbf0d832acd07"} Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.690023 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" event={"ID":"8653b927-16f7-4400-8965-2ebdd408c0ca","Type":"ContainerStarted","Data":"d890a21c0dcc3131e3adf603f0be8fdbb6200467bde9843417c6cc98ae0be17e"} Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.690927 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.694333 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerStarted","Data":"bbaff62e027821d9c307078b4b08d148719277676ddb6a16b9c516aaddf12978"} Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.694486 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api-log" containerID="cri-o://facb31c8cf3867da3d9e3d326ccca761ad21e9368839c0f522e21d870bac47e0" gracePeriod=30 Dec 06 
04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.694873 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.694913 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api" containerID="cri-o://bbaff62e027821d9c307078b4b08d148719277676ddb6a16b9c516aaddf12978" gracePeriod=30 Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.703213 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.990788611 podStartE2EDuration="4.703198369s" podCreationTimestamp="2025-12-06 04:04:14 +0000 UTC" firstStartedPulling="2025-12-06 04:04:15.935561249 +0000 UTC m=+3509.058168821" lastFinishedPulling="2025-12-06 04:04:16.647971007 +0000 UTC m=+3509.770578579" observedRunningTime="2025-12-06 04:04:18.700832595 +0000 UTC m=+3511.823440167" watchObservedRunningTime="2025-12-06 04:04:18.703198369 +0000 UTC m=+3511.825805941" Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.732895 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" podStartSLOduration=3.732877998 podStartE2EDuration="3.732877998s" podCreationTimestamp="2025-12-06 04:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:04:18.730287768 +0000 UTC m=+3511.852895330" watchObservedRunningTime="2025-12-06 04:04:18.732877998 +0000 UTC m=+3511.855485570" Dec 06 04:04:18 crc kubenswrapper[4801]: I1206 04:04:18.755898 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.755875878 podStartE2EDuration="3.755875878s" podCreationTimestamp="2025-12-06 04:04:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:04:18.755040026 +0000 UTC m=+3511.877647598" watchObservedRunningTime="2025-12-06 04:04:18.755875878 +0000 UTC m=+3511.878483440" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.717722 4801 generic.go:334] "Generic (PLEG): container finished" podID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerID="bbaff62e027821d9c307078b4b08d148719277676ddb6a16b9c516aaddf12978" exitCode=0 Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.718257 4801 generic.go:334] "Generic (PLEG): container finished" podID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerID="facb31c8cf3867da3d9e3d326ccca761ad21e9368839c0f522e21d870bac47e0" exitCode=143 Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.719909 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerDied","Data":"bbaff62e027821d9c307078b4b08d148719277676ddb6a16b9c516aaddf12978"} Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.719991 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerDied","Data":"facb31c8cf3867da3d9e3d326ccca761ad21e9368839c0f522e21d870bac47e0"} Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.773466 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.879786 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.879947 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.879992 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880010 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880082 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880086 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880126 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880162 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzzk7\" (UniqueName: \"kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7\") pod \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\" (UID: \"4385ec90-302f-4bdf-aa67-67aa0a2f115f\") " Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.880546 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs" (OuterVolumeSpecName: "logs") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.881053 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4385ec90-302f-4bdf-aa67-67aa0a2f115f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.881074 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4385ec90-302f-4bdf-aa67-67aa0a2f115f-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.888501 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts" (OuterVolumeSpecName: "scripts") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.888603 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7" (OuterVolumeSpecName: "kube-api-access-zzzk7") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "kube-api-access-zzzk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.889267 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.927969 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.983559 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.983604 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.983615 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.983625 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzzk7\" (UniqueName: \"kubernetes.io/projected/4385ec90-302f-4bdf-aa67-67aa0a2f115f-kube-api-access-zzzk7\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:19 crc kubenswrapper[4801]: I1206 04:04:19.998185 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data" (OuterVolumeSpecName: "config-data") pod "4385ec90-302f-4bdf-aa67-67aa0a2f115f" (UID: "4385ec90-302f-4bdf-aa67-67aa0a2f115f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.086265 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4385ec90-302f-4bdf-aa67-67aa0a2f115f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.586928 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.632393 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.735524 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.735865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"4385ec90-302f-4bdf-aa67-67aa0a2f115f","Type":"ContainerDied","Data":"0f5255d48ff1e363ae90aff1a83ec788e02e0f7c183b61945a85b1dd9d2660b4"} Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.735934 4801 scope.go:117] "RemoveContainer" containerID="bbaff62e027821d9c307078b4b08d148719277676ddb6a16b9c516aaddf12978" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.799811 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.828094 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.828356 4801 scope.go:117] "RemoveContainer" containerID="facb31c8cf3867da3d9e3d326ccca761ad21e9368839c0f522e21d870bac47e0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.838813 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:20 crc kubenswrapper[4801]: E1206 04:04:20.839338 
4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.839360 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api" Dec 06 04:04:20 crc kubenswrapper[4801]: E1206 04:04:20.839429 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api-log" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.839437 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api-log" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.839654 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api-log" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.839680 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" containerName="manila-api" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.847425 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.850340 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.850654 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.850876 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.861188 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.910855 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-public-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.910916 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.910944 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-internal-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911031 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07982175-3bb7-4bfa-b4a7-49c3eff288ac-logs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911089 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-scripts\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911112 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07982175-3bb7-4bfa-b4a7-49c3eff288ac-etc-machine-id\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911138 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911166 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data-custom\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:20 crc kubenswrapper[4801]: I1206 04:04:20.911208 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j67kf\" (UniqueName: \"kubernetes.io/projected/07982175-3bb7-4bfa-b4a7-49c3eff288ac-kube-api-access-j67kf\") pod \"manila-api-0\" (UID: 
\"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012109 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-internal-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012267 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07982175-3bb7-4bfa-b4a7-49c3eff288ac-logs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012303 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-scripts\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012336 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07982175-3bb7-4bfa-b4a7-49c3eff288ac-etc-machine-id\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012360 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012406 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data-custom\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012467 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j67kf\" (UniqueName: \"kubernetes.io/projected/07982175-3bb7-4bfa-b4a7-49c3eff288ac-kube-api-access-j67kf\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012509 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-public-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012543 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.012604 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07982175-3bb7-4bfa-b4a7-49c3eff288ac-etc-machine-id\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.013130 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07982175-3bb7-4bfa-b4a7-49c3eff288ac-logs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 
04:04:21.018377 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-internal-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.018837 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data-custom\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.020011 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-scripts\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.022492 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-config-data\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.023270 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-public-tls-certs\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.024836 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07982175-3bb7-4bfa-b4a7-49c3eff288ac-combined-ca-bundle\") pod \"manila-api-0\" (UID: 
\"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.036646 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j67kf\" (UniqueName: \"kubernetes.io/projected/07982175-3bb7-4bfa-b4a7-49c3eff288ac-kube-api-access-j67kf\") pod \"manila-api-0\" (UID: \"07982175-3bb7-4bfa-b4a7-49c3eff288ac\") " pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.159998 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.160494 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="sg-core" containerID="cri-o://68f83de2b7d363c229a1d3c8b75453b858797dd1200cb4c85726cb27bdf9ee2c" gracePeriod=30 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.160706 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="proxy-httpd" containerID="cri-o://446fdbf6ad8292e235c8dd770f603bcdfa738ec5e930ae0612f87f657082690e" gracePeriod=30 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.160631 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-notification-agent" containerID="cri-o://968ec772f9477e403e8669f666fce906226c348e9801a39f3606f9f85b917192" gracePeriod=30 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.160354 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-central-agent" containerID="cri-o://688910860aa4bcbf08d703f730a7b469a5edcaaa96b410af76a33a7723132eeb" gracePeriod=30 Dec 06 04:04:21 
crc kubenswrapper[4801]: I1206 04:04:21.168273 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.251103 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4385ec90-302f-4bdf-aa67-67aa0a2f115f" path="/var/lib/kubelet/pods/4385ec90-302f-4bdf-aa67-67aa0a2f115f/volumes" Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.767919 4801 generic.go:334] "Generic (PLEG): container finished" podID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerID="446fdbf6ad8292e235c8dd770f603bcdfa738ec5e930ae0612f87f657082690e" exitCode=0 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.768528 4801 generic.go:334] "Generic (PLEG): container finished" podID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerID="68f83de2b7d363c229a1d3c8b75453b858797dd1200cb4c85726cb27bdf9ee2c" exitCode=2 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.768541 4801 generic.go:334] "Generic (PLEG): container finished" podID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerID="688910860aa4bcbf08d703f730a7b469a5edcaaa96b410af76a33a7723132eeb" exitCode=0 Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.768238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerDied","Data":"446fdbf6ad8292e235c8dd770f603bcdfa738ec5e930ae0612f87f657082690e"} Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.768583 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerDied","Data":"68f83de2b7d363c229a1d3c8b75453b858797dd1200cb4c85726cb27bdf9ee2c"} Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.768597 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerDied","Data":"688910860aa4bcbf08d703f730a7b469a5edcaaa96b410af76a33a7723132eeb"} Dec 06 04:04:21 crc kubenswrapper[4801]: I1206 04:04:21.834639 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 06 04:04:22 crc kubenswrapper[4801]: I1206 04:04:22.797373 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07982175-3bb7-4bfa-b4a7-49c3eff288ac","Type":"ContainerStarted","Data":"ecd8115005d8923c5b350d18b236b28259f60752d49affe612624b8630c6a144"} Dec 06 04:04:22 crc kubenswrapper[4801]: I1206 04:04:22.797741 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:04:22 crc kubenswrapper[4801]: I1206 04:04:22.797767 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07982175-3bb7-4bfa-b4a7-49c3eff288ac","Type":"ContainerStarted","Data":"6a4658aeee403e3e3dc8ee082c3e15db4332e1ac9d0f8551ee7941183f9d083b"} Dec 06 04:04:22 crc kubenswrapper[4801]: I1206 04:04:22.939929 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7d85575696-vjhxr" Dec 06 04:04:23 crc kubenswrapper[4801]: I1206 04:04:23.007234 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:04:23 crc kubenswrapper[4801]: I1206 04:04:23.814595 4801 generic.go:334] "Generic (PLEG): container finished" podID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerID="968ec772f9477e403e8669f666fce906226c348e9801a39f3606f9f85b917192" exitCode=0 Dec 06 04:04:23 crc kubenswrapper[4801]: I1206 04:04:23.814868 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerDied","Data":"968ec772f9477e403e8669f666fce906226c348e9801a39f3606f9f85b917192"} Dec 06 04:04:23 crc 
kubenswrapper[4801]: I1206 04:04:23.815683 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon-log" containerID="cri-o://e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76" gracePeriod=30 Dec 06 04:04:23 crc kubenswrapper[4801]: I1206 04:04:23.815870 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" containerID="cri-o://9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0" gracePeriod=30 Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.829042 4801 generic.go:334] "Generic (PLEG): container finished" podID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerID="c5b1d76e4db28094ba85b1a137d25016c64cf6fb09cfd586759bf152daa8335a" exitCode=137 Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.829318 4801 generic.go:334] "Generic (PLEG): container finished" podID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerID="96c6148d942ded3d99feb1ccef7632ba67942118f83d6ea56823e6b3eb842a38" exitCode=137 Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.829194 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerDied","Data":"c5b1d76e4db28094ba85b1a137d25016c64cf6fb09cfd586759bf152daa8335a"} Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.829394 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerDied","Data":"96c6148d942ded3d99feb1ccef7632ba67942118f83d6ea56823e6b3eb842a38"} Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.831605 4801 generic.go:334] "Generic (PLEG): container finished" podID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" 
containerID="c95111234899a63892976c545a97542d960aeb3dbd614c1ff1e9ce8ac12285f3" exitCode=137 Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.831633 4801 generic.go:334] "Generic (PLEG): container finished" podID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerID="8e7c42c883f27aa4940a83a9f23c010d5b2f9fb4bb016d84d9637a11b2a3d309" exitCode=137 Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.831649 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerDied","Data":"c95111234899a63892976c545a97542d960aeb3dbd614c1ff1e9ce8ac12285f3"} Dec 06 04:04:24 crc kubenswrapper[4801]: I1206 04:04:24.831671 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerDied","Data":"8e7c42c883f27aa4940a83a9f23c010d5b2f9fb4bb016d84d9637a11b2a3d309"} Dec 06 04:04:25 crc kubenswrapper[4801]: I1206 04:04:25.309946 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 06 04:04:25 crc kubenswrapper[4801]: I1206 04:04:25.633336 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-rxnbr" Dec 06 04:04:25 crc kubenswrapper[4801]: I1206 04:04:25.723693 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 04:04:25 crc kubenswrapper[4801]: I1206 04:04:25.724125 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="dnsmasq-dns" containerID="cri-o://4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879" gracePeriod=10 Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.095112 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.181021 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.182215 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.182289 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.182390 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.182614 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.182867 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-wt6qq\" (UniqueName: \"kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.183508 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.183643 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.183732 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd\") pod \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\" (UID: \"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.186106 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.186932 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.193549 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq" (OuterVolumeSpecName: "kube-api-access-wt6qq") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "kube-api-access-wt6qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.196903 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts" (OuterVolumeSpecName: "scripts") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.267947 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.294170 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6qq\" (UniqueName: \"kubernetes.io/projected/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-kube-api-access-wt6qq\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.294213 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.294222 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.294234 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.335718 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.336976 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.381396 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.411270 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.412295 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.433985 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data" (OuterVolumeSpecName: "config-data") pod "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" (UID: "0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.449151 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.530654 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs\") pod \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.530901 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts\") pod \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.531078 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data\") pod \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.531186 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b9pt\" (UniqueName: \"kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt\") pod \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.531274 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key\") pod \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\" (UID: \"7d6c2bd0-84b0-42b6-bb5a-2f568981b344\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.532247 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.532624 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs" (OuterVolumeSpecName: "logs") pod "7d6c2bd0-84b0-42b6-bb5a-2f568981b344" (UID: "7d6c2bd0-84b0-42b6-bb5a-2f568981b344"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.539177 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt" (OuterVolumeSpecName: "kube-api-access-7b9pt") pod "7d6c2bd0-84b0-42b6-bb5a-2f568981b344" (UID: "7d6c2bd0-84b0-42b6-bb5a-2f568981b344"). InnerVolumeSpecName "kube-api-access-7b9pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.546170 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7d6c2bd0-84b0-42b6-bb5a-2f568981b344" (UID: "7d6c2bd0-84b0-42b6-bb5a-2f568981b344"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.586852 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data" (OuterVolumeSpecName: "config-data") pod "7d6c2bd0-84b0-42b6-bb5a-2f568981b344" (UID: "7d6c2bd0-84b0-42b6-bb5a-2f568981b344"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.588328 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts" (OuterVolumeSpecName: "scripts") pod "7d6c2bd0-84b0-42b6-bb5a-2f568981b344" (UID: "7d6c2bd0-84b0-42b6-bb5a-2f568981b344"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.635386 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvlk8\" (UniqueName: \"kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8\") pod \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.635462 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data\") pod \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.635617 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts\") pod \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.635710 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs\") pod \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.635794 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key\") pod \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\" (UID: \"0cc73ff2-805f-4c1d-8758-125c82a15fdc\") " Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.636739 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.636791 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.636804 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b9pt\" (UniqueName: \"kubernetes.io/projected/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-kube-api-access-7b9pt\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.636815 4801 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.636825 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d6c2bd0-84b0-42b6-bb5a-2f568981b344-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.638370 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs" (OuterVolumeSpecName: "logs") pod "0cc73ff2-805f-4c1d-8758-125c82a15fdc" (UID: "0cc73ff2-805f-4c1d-8758-125c82a15fdc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.643204 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8" (OuterVolumeSpecName: "kube-api-access-vvlk8") pod "0cc73ff2-805f-4c1d-8758-125c82a15fdc" (UID: "0cc73ff2-805f-4c1d-8758-125c82a15fdc"). InnerVolumeSpecName "kube-api-access-vvlk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.658616 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0cc73ff2-805f-4c1d-8758-125c82a15fdc" (UID: "0cc73ff2-805f-4c1d-8758-125c82a15fdc"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.664083 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts" (OuterVolumeSpecName: "scripts") pod "0cc73ff2-805f-4c1d-8758-125c82a15fdc" (UID: "0cc73ff2-805f-4c1d-8758-125c82a15fdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.687921 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data" (OuterVolumeSpecName: "config-data") pod "0cc73ff2-805f-4c1d-8758-125c82a15fdc" (UID: "0cc73ff2-805f-4c1d-8758-125c82a15fdc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.739216 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cc73ff2-805f-4c1d-8758-125c82a15fdc-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.739262 4801 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0cc73ff2-805f-4c1d-8758-125c82a15fdc-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.739278 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvlk8\" (UniqueName: \"kubernetes.io/projected/0cc73ff2-805f-4c1d-8758-125c82a15fdc-kube-api-access-vvlk8\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.739288 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.739302 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0cc73ff2-805f-4c1d-8758-125c82a15fdc-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.850743 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.889578 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86b4c777b9-r2w76" event={"ID":"0cc73ff2-805f-4c1d-8758-125c82a15fdc","Type":"ContainerDied","Data":"6f274fddc5507077d17547c6f55362ac2127c711d9ac9e4b496958ede5b305fc"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.889663 4801 scope.go:117] "RemoveContainer" containerID="c5b1d76e4db28094ba85b1a137d25016c64cf6fb09cfd586759bf152daa8335a" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.889892 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86b4c777b9-r2w76" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.896403 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerStarted","Data":"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.908746 4801 generic.go:334] "Generic (PLEG): container finished" podID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerID="4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879" exitCode=0 Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.909022 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.909905 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" event={"ID":"080aa27a-3c47-4c6f-bced-06f2ebab0d84","Type":"ContainerDied","Data":"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.910133 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-zg8p2" event={"ID":"080aa27a-3c47-4c6f-bced-06f2ebab0d84","Type":"ContainerDied","Data":"fa1f7734b63f4311b9f1c3cf4686e3aa2e9f7d42ff9675d1a32c280d39d2498b"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.921609 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57","Type":"ContainerDied","Data":"f26bda35dd6da283bb12ccf214c165e4eade1447e24fe91889bd341eac314ae0"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.921821 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.961583 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-786cb8dcb9-wz2q4" event={"ID":"7d6c2bd0-84b0-42b6-bb5a-2f568981b344","Type":"ContainerDied","Data":"4292cbca9be316e519d2e43935af6f1938941a0d0bf61f5a98a94e4be445027f"} Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.961779 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.961872 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-786cb8dcb9-wz2q4" Dec 06 04:04:26 crc kubenswrapper[4801]: I1206 04:04:26.985028 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86b4c777b9-r2w76"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.001610 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.016310 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.024414 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.037233 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-786cb8dcb9-wz2q4"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.048018 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.048115 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.048136 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.052017 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.052204 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.052241 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmrzv\" (UniqueName: \"kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv\") pod \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\" (UID: \"080aa27a-3c47-4c6f-bced-06f2ebab0d84\") " Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.075402 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.096439 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv" (OuterVolumeSpecName: "kube-api-access-zmrzv") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "kube-api-access-zmrzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.121739 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.121795 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.121861 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.121869 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.121912 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.121919 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.121939 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="proxy-httpd" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.121946 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="proxy-httpd" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.121962 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.121969 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon" Dec 06 
04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.122000 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-notification-agent" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.122008 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-notification-agent" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.122034 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="sg-core" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.122041 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="sg-core" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.122067 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="init" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.122074 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="init" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.122113 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="dnsmasq-dns" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.122122 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="dnsmasq-dns" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.122144 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-central-agent" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.122162 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-central-agent" Dec 06 04:04:27 crc 
kubenswrapper[4801]: I1206 04:04:27.127244 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-central-agent" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127282 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" containerName="dnsmasq-dns" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127301 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127327 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="sg-core" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127359 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon-log" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127374 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="ceilometer-notification-agent" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127388 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" containerName="horizon" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127413 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" containerName="horizon" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.127432 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" containerName="proxy-httpd" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.148314 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.149062 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.149892 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.150051 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.161374 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.161936 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.163060 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.171164 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.171212 4801 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.171230 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmrzv\" (UniqueName: \"kubernetes.io/projected/080aa27a-3c47-4c6f-bced-06f2ebab0d84-kube-api-access-zmrzv\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.177967 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.180490 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.227515 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cc73ff2-805f-4c1d-8758-125c82a15fdc" path="/var/lib/kubelet/pods/0cc73ff2-805f-4c1d-8758-125c82a15fdc/volumes" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.229087 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57" path="/var/lib/kubelet/pods/0dbf36b6-a9d1-4b92-bc20-4cc10aa11f57/volumes" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.230638 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d6c2bd0-84b0-42b6-bb5a-2f568981b344" path="/var/lib/kubelet/pods/7d6c2bd0-84b0-42b6-bb5a-2f568981b344/volumes" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.238619 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config" (OuterVolumeSpecName: "config") pod "080aa27a-3c47-4c6f-bced-06f2ebab0d84" (UID: "080aa27a-3c47-4c6f-bced-06f2ebab0d84"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.242884 4801 scope.go:117] "RemoveContainer" containerID="96c6148d942ded3d99feb1ccef7632ba67942118f83d6ea56823e6b3eb842a38" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278624 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278710 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278790 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278823 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278880 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxd99\" (UniqueName: 
\"kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278904 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.278987 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.279035 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.279122 4801 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-config\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.279138 4801 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.279151 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/080aa27a-3c47-4c6f-bced-06f2ebab0d84-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.335010 4801 scope.go:117] "RemoveContainer" containerID="4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.369437 4801 scope.go:117] "RemoveContainer" containerID="a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381655 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381800 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381829 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381895 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381940 4801 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.381991 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxd99\" (UniqueName: \"kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.382033 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.382094 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.384315 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.384464 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " 
pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.388246 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.388400 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.388699 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.389094 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.391385 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.400986 4801 scope.go:117] "RemoveContainer" containerID="4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 
04:04:27.402312 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879\": container with ID starting with 4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879 not found: ID does not exist" containerID="4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.402375 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879"} err="failed to get container status \"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879\": rpc error: code = NotFound desc = could not find container \"4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879\": container with ID starting with 4139018954d3e58ce29a9c45d42dc18c754db9e17864b8be723cbd0f628e8879 not found: ID does not exist" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.402414 4801 scope.go:117] "RemoveContainer" containerID="a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5" Dec 06 04:04:27 crc kubenswrapper[4801]: E1206 04:04:27.402862 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5\": container with ID starting with a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5 not found: ID does not exist" containerID="a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.402933 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5"} err="failed to get container status \"a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5\": rpc 
error: code = NotFound desc = could not find container \"a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5\": container with ID starting with a4a98c3851a21e2fbabd5364d4ec2aca69b556bca30f8b5a9db841c12ecd5ff5 not found: ID does not exist" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.403011 4801 scope.go:117] "RemoveContainer" containerID="446fdbf6ad8292e235c8dd770f603bcdfa738ec5e930ae0612f87f657082690e" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.403196 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxd99\" (UniqueName: \"kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99\") pod \"ceilometer-0\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.422194 4801 scope.go:117] "RemoveContainer" containerID="68f83de2b7d363c229a1d3c8b75453b858797dd1200cb4c85726cb27bdf9ee2c" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.450965 4801 scope.go:117] "RemoveContainer" containerID="968ec772f9477e403e8669f666fce906226c348e9801a39f3606f9f85b917192" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.473874 4801 scope.go:117] "RemoveContainer" containerID="688910860aa4bcbf08d703f730a7b469a5edcaaa96b410af76a33a7723132eeb" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.496991 4801 scope.go:117] "RemoveContainer" containerID="c95111234899a63892976c545a97542d960aeb3dbd614c1ff1e9ce8ac12285f3" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.533658 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.554509 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.561574 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-zg8p2"] Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.717105 4801 scope.go:117] "RemoveContainer" containerID="8e7c42c883f27aa4940a83a9f23c010d5b2f9fb4bb016d84d9637a11b2a3d309" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.969769 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Dec 06 04:04:27 crc kubenswrapper[4801]: I1206 04:04:27.998402 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerStarted","Data":"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147"} Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.008886 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07982175-3bb7-4bfa-b4a7-49c3eff288ac","Type":"ContainerStarted","Data":"8be8e90ce8e1ab6cf180d0085b20cea94bb2803ab50efcffa795b3b722560fed"} Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.010284 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.054310 4801 generic.go:334] "Generic (PLEG): container finished" podID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerID="9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0" exitCode=0 Dec 06 04:04:28 crc 
kubenswrapper[4801]: I1206 04:04:28.054357 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerDied","Data":"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0"} Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.054628 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.471748782 podStartE2EDuration="13.054583989s" podCreationTimestamp="2025-12-06 04:04:15 +0000 UTC" firstStartedPulling="2025-12-06 04:04:16.309035094 +0000 UTC m=+3509.431642666" lastFinishedPulling="2025-12-06 04:04:25.891870301 +0000 UTC m=+3519.014477873" observedRunningTime="2025-12-06 04:04:28.036305896 +0000 UTC m=+3521.158913478" watchObservedRunningTime="2025-12-06 04:04:28.054583989 +0000 UTC m=+3521.177191561" Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.068391 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=8.06836244 podStartE2EDuration="8.06836244s" podCreationTimestamp="2025-12-06 04:04:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:04:28.060608132 +0000 UTC m=+3521.183215704" watchObservedRunningTime="2025-12-06 04:04:28.06836244 +0000 UTC m=+3521.190970012" Dec 06 04:04:28 crc kubenswrapper[4801]: I1206 04:04:28.355299 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:28 crc kubenswrapper[4801]: W1206 04:04:28.367003 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod735775ca_9360_4059_84d8_6830e93b807a.slice/crio-bd1f993f0d9a3182cb4d7320a8c41a046843f79881bcd63205fb7cbe5ce5df52 WatchSource:0}: Error finding container 
bd1f993f0d9a3182cb4d7320a8c41a046843f79881bcd63205fb7cbe5ce5df52: Status 404 returned error can't find the container with id bd1f993f0d9a3182cb4d7320a8c41a046843f79881bcd63205fb7cbe5ce5df52 Dec 06 04:04:29 crc kubenswrapper[4801]: I1206 04:04:29.082079 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerStarted","Data":"bd1f993f0d9a3182cb4d7320a8c41a046843f79881bcd63205fb7cbe5ce5df52"} Dec 06 04:04:29 crc kubenswrapper[4801]: I1206 04:04:29.223504 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080aa27a-3c47-4c6f-bced-06f2ebab0d84" path="/var/lib/kubelet/pods/080aa27a-3c47-4c6f-bced-06f2ebab0d84/volumes" Dec 06 04:04:30 crc kubenswrapper[4801]: I1206 04:04:30.096380 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerStarted","Data":"fb72cfa8043f08f4cc71fc159b998ff9d633f0d06d79686062a4326d178d99a1"} Dec 06 04:04:30 crc kubenswrapper[4801]: I1206 04:04:30.097115 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerStarted","Data":"3fe4815b5cb768b6771e8339b33925931509f0ea3d3767a39c3c6200ff60b051"} Dec 06 04:04:30 crc kubenswrapper[4801]: I1206 04:04:30.248709 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:30 crc kubenswrapper[4801]: I1206 04:04:30.599292 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 04:04:31 crc kubenswrapper[4801]: I1206 04:04:31.107026 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerStarted","Data":"7163d7c804177ae8381f89cc3370cd42ab9c13d7f225d2766babf409a9b79681"} Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 
04:04:33.133540 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerStarted","Data":"39b1d4260b17b4434106b3254497801ae9523698c02aa936d164bff2ba196bd9"} Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.134437 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-central-agent" containerID="cri-o://3fe4815b5cb768b6771e8339b33925931509f0ea3d3767a39c3c6200ff60b051" gracePeriod=30 Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.134735 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.135000 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="proxy-httpd" containerID="cri-o://39b1d4260b17b4434106b3254497801ae9523698c02aa936d164bff2ba196bd9" gracePeriod=30 Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.135044 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="sg-core" containerID="cri-o://7163d7c804177ae8381f89cc3370cd42ab9c13d7f225d2766babf409a9b79681" gracePeriod=30 Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.135073 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-notification-agent" containerID="cri-o://fb72cfa8043f08f4cc71fc159b998ff9d633f0d06d79686062a4326d178d99a1" gracePeriod=30 Dec 06 04:04:33 crc kubenswrapper[4801]: I1206 04:04:33.164271 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=3.587216539 podStartE2EDuration="7.164252589s" podCreationTimestamp="2025-12-06 04:04:26 +0000 UTC" firstStartedPulling="2025-12-06 04:04:28.370170624 +0000 UTC m=+3521.492778196" lastFinishedPulling="2025-12-06 04:04:31.947206664 +0000 UTC m=+3525.069814246" observedRunningTime="2025-12-06 04:04:33.160325773 +0000 UTC m=+3526.282933345" watchObservedRunningTime="2025-12-06 04:04:33.164252589 +0000 UTC m=+3526.286860161" Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147036 4801 generic.go:334] "Generic (PLEG): container finished" podID="735775ca-9360-4059-84d8-6830e93b807a" containerID="39b1d4260b17b4434106b3254497801ae9523698c02aa936d164bff2ba196bd9" exitCode=0 Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147364 4801 generic.go:334] "Generic (PLEG): container finished" podID="735775ca-9360-4059-84d8-6830e93b807a" containerID="7163d7c804177ae8381f89cc3370cd42ab9c13d7f225d2766babf409a9b79681" exitCode=2 Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147375 4801 generic.go:334] "Generic (PLEG): container finished" podID="735775ca-9360-4059-84d8-6830e93b807a" containerID="fb72cfa8043f08f4cc71fc159b998ff9d633f0d06d79686062a4326d178d99a1" exitCode=0 Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147137 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerDied","Data":"39b1d4260b17b4434106b3254497801ae9523698c02aa936d164bff2ba196bd9"} Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147428 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerDied","Data":"7163d7c804177ae8381f89cc3370cd42ab9c13d7f225d2766babf409a9b79681"} Dec 06 04:04:34 crc kubenswrapper[4801]: I1206 04:04:34.147451 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerDied","Data":"fb72cfa8043f08f4cc71fc159b998ff9d633f0d06d79686062a4326d178d99a1"} Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.160312 4801 generic.go:334] "Generic (PLEG): container finished" podID="735775ca-9360-4059-84d8-6830e93b807a" containerID="3fe4815b5cb768b6771e8339b33925931509f0ea3d3767a39c3c6200ff60b051" exitCode=0 Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.160414 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerDied","Data":"3fe4815b5cb768b6771e8339b33925931509f0ea3d3767a39c3c6200ff60b051"} Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.239213 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.380463 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.380900 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381108 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381236 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381470 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381639 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381806 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxd99\" (UniqueName: \"kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.381943 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.382421 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.382555 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts\") pod \"735775ca-9360-4059-84d8-6830e93b807a\" (UID: \"735775ca-9360-4059-84d8-6830e93b807a\") " Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.383413 4801 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.383539 4801 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/735775ca-9360-4059-84d8-6830e93b807a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 
crc kubenswrapper[4801]: I1206 04:04:35.390415 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99" (OuterVolumeSpecName: "kube-api-access-dxd99") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "kube-api-access-dxd99". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.398550 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.408078 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts" (OuterVolumeSpecName: "scripts") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.411500 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.453777 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.486236 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.486273 4801 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.486489 4801 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.486504 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxd99\" (UniqueName: \"kubernetes.io/projected/735775ca-9360-4059-84d8-6830e93b807a-kube-api-access-dxd99\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.497603 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.520219 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data" (OuterVolumeSpecName: "config-data") pod "735775ca-9360-4059-84d8-6830e93b807a" (UID: "735775ca-9360-4059-84d8-6830e93b807a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.589971 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:35 crc kubenswrapper[4801]: I1206 04:04:35.590039 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/735775ca-9360-4059-84d8-6830e93b807a-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.191530 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"735775ca-9360-4059-84d8-6830e93b807a","Type":"ContainerDied","Data":"bd1f993f0d9a3182cb4d7320a8c41a046843f79881bcd63205fb7cbe5ce5df52"} Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.191962 4801 scope.go:117] "RemoveContainer" containerID="39b1d4260b17b4434106b3254497801ae9523698c02aa936d164bff2ba196bd9" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.192140 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.220804 4801 scope.go:117] "RemoveContainer" containerID="7163d7c804177ae8381f89cc3370cd42ab9c13d7f225d2766babf409a9b79681" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.251670 4801 scope.go:117] "RemoveContainer" containerID="fb72cfa8043f08f4cc71fc159b998ff9d633f0d06d79686062a4326d178d99a1" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.267125 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.280115 4801 scope.go:117] "RemoveContainer" containerID="3fe4815b5cb768b6771e8339b33925931509f0ea3d3767a39c3c6200ff60b051" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.291929 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.317250 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:36 crc kubenswrapper[4801]: E1206 04:04:36.317870 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="sg-core" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.317891 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="sg-core" Dec 06 04:04:36 crc kubenswrapper[4801]: E1206 04:04:36.317901 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="proxy-httpd" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.317908 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="proxy-httpd" Dec 06 04:04:36 crc kubenswrapper[4801]: E1206 04:04:36.317920 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735775ca-9360-4059-84d8-6830e93b807a" 
containerName="ceilometer-central-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.317927 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-central-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: E1206 04:04:36.317938 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-notification-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.317944 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-notification-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.318131 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-central-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.318155 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="proxy-httpd" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.318169 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="sg-core" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.318182 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="735775ca-9360-4059-84d8-6830e93b807a" containerName="ceilometer-notification-agent" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.320366 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.324587 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.324597 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.324944 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.329813 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517568 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517654 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-config-data\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517689 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmvw\" (UniqueName: \"kubernetes.io/projected/e69f07f2-fed0-4999-9167-1d3c6d17fccd-kube-api-access-bnmvw\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517739 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517809 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517842 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-run-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517893 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-log-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.517962 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-scripts\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.619998 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-scripts\") pod \"ceilometer-0\" (UID: 
\"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620090 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620132 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-config-data\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620190 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmvw\" (UniqueName: \"kubernetes.io/projected/e69f07f2-fed0-4999-9167-1d3c6d17fccd-kube-api-access-bnmvw\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620238 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620292 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620320 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-run-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620372 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-log-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.620914 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-log-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.622554 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69f07f2-fed0-4999-9167-1d3c6d17fccd-run-httpd\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.627520 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.627732 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 
crc kubenswrapper[4801]: I1206 04:04:36.636184 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-scripts\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.636424 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.638793 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69f07f2-fed0-4999-9167-1d3c6d17fccd-config-data\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.645463 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmvw\" (UniqueName: \"kubernetes.io/projected/e69f07f2-fed0-4999-9167-1d3c6d17fccd-kube-api-access-bnmvw\") pod \"ceilometer-0\" (UID: \"e69f07f2-fed0-4999-9167-1d3c6d17fccd\") " pod="openstack/ceilometer-0" Dec 06 04:04:36 crc kubenswrapper[4801]: I1206 04:04:36.942275 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.095233 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.169237 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.201793 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="manila-scheduler" containerID="cri-o://e8d9864afae8eb6a99aca81de23867e22e2e4cd2d3b97b814e39c193575e8f36" gracePeriod=30 Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.202193 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="probe" containerID="cri-o://de39b1860d0120716815f05e8047c80cfe86bcfe8f6b818967afbf0d832acd07" gracePeriod=30 Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.222253 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735775ca-9360-4059-84d8-6830e93b807a" path="/var/lib/kubelet/pods/735775ca-9360-4059-84d8-6830e93b807a/volumes" Dec 06 04:04:37 crc kubenswrapper[4801]: I1206 04:04:37.969537 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.083537 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 04:04:38 crc kubenswrapper[4801]: W1206 04:04:38.109290 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69f07f2_fed0_4999_9167_1d3c6d17fccd.slice/crio-4697c1c594cd2fec02e8340199d9541d6d23160131c6b469f2af439ee1f44d50 WatchSource:0}: Error finding container 4697c1c594cd2fec02e8340199d9541d6d23160131c6b469f2af439ee1f44d50: Status 404 returned error can't find the container with id 4697c1c594cd2fec02e8340199d9541d6d23160131c6b469f2af439ee1f44d50 Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.209845 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69f07f2-fed0-4999-9167-1d3c6d17fccd","Type":"ContainerStarted","Data":"4697c1c594cd2fec02e8340199d9541d6d23160131c6b469f2af439ee1f44d50"} Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.212708 4801 generic.go:334] "Generic (PLEG): container finished" podID="390bb08f-e246-447f-be8a-341528764d6f" containerID="de39b1860d0120716815f05e8047c80cfe86bcfe8f6b818967afbf0d832acd07" exitCode=0 Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.212732 4801 generic.go:334] "Generic (PLEG): container finished" podID="390bb08f-e246-447f-be8a-341528764d6f" containerID="e8d9864afae8eb6a99aca81de23867e22e2e4cd2d3b97b814e39c193575e8f36" exitCode=0 Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.212745 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerDied","Data":"de39b1860d0120716815f05e8047c80cfe86bcfe8f6b818967afbf0d832acd07"} Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.212801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerDied","Data":"e8d9864afae8eb6a99aca81de23867e22e2e4cd2d3b97b814e39c193575e8f36"} Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.521967 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.660803 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.661724 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.661835 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch8ng\" (UniqueName: \"kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.661905 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.661974 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.662026 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle\") pod \"390bb08f-e246-447f-be8a-341528764d6f\" (UID: \"390bb08f-e246-447f-be8a-341528764d6f\") " Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.662118 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.662554 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/390bb08f-e246-447f-be8a-341528764d6f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.667462 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts" (OuterVolumeSpecName: "scripts") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.668549 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng" (OuterVolumeSpecName: "kube-api-access-ch8ng") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "kube-api-access-ch8ng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.669109 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.719096 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.766414 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch8ng\" (UniqueName: \"kubernetes.io/projected/390bb08f-e246-447f-be8a-341528764d6f-kube-api-access-ch8ng\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.767414 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.767498 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.767553 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-scripts\") 
on node \"crc\" DevicePath \"\"" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.780454 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data" (OuterVolumeSpecName: "config-data") pod "390bb08f-e246-447f-be8a-341528764d6f" (UID: "390bb08f-e246-447f-be8a-341528764d6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:38 crc kubenswrapper[4801]: I1206 04:04:38.869066 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/390bb08f-e246-447f-be8a-341528764d6f-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.226015 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.232091 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"390bb08f-e246-447f-be8a-341528764d6f","Type":"ContainerDied","Data":"86d6b551777fb8c96004851901dfd6211695815d0a42d334faeb5318cce6d213"} Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.232148 4801 scope.go:117] "RemoveContainer" containerID="de39b1860d0120716815f05e8047c80cfe86bcfe8f6b818967afbf0d832acd07" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.331356 4801 scope.go:117] "RemoveContainer" containerID="e8d9864afae8eb6a99aca81de23867e22e2e4cd2d3b97b814e39c193575e8f36" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.359938 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.375644 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.383166 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/manila-scheduler-0"] Dec 06 04:04:39 crc kubenswrapper[4801]: E1206 04:04:39.383706 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="manila-scheduler" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.383725 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="manila-scheduler" Dec 06 04:04:39 crc kubenswrapper[4801]: E1206 04:04:39.383800 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="probe" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.383809 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="probe" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.384060 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="probe" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.384083 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="390bb08f-e246-447f-be8a-341528764d6f" containerName="manila-scheduler" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.385383 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.387513 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.394492 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.483345 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cknv\" (UniqueName: \"kubernetes.io/projected/606f274f-6ae7-4b11-b684-e95831283ee4-kube-api-access-6cknv\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.483391 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-scripts\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.483436 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.483741 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/606f274f-6ae7-4b11-b684-e95831283ee4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.483969 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.484081 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.585992 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586117 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/606f274f-6ae7-4b11-b684-e95831283ee4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586202 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586244 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/606f274f-6ae7-4b11-b684-e95831283ee4-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586286 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cknv\" (UniqueName: \"kubernetes.io/projected/606f274f-6ae7-4b11-b684-e95831283ee4-kube-api-access-6cknv\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.586304 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-scripts\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.591087 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-scripts\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.591622 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " 
pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.592287 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-config-data\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.600441 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/606f274f-6ae7-4b11-b684-e95831283ee4-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.604362 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cknv\" (UniqueName: \"kubernetes.io/projected/606f274f-6ae7-4b11-b684-e95831283ee4-kube-api-access-6cknv\") pod \"manila-scheduler-0\" (UID: \"606f274f-6ae7-4b11-b684-e95831283ee4\") " pod="openstack/manila-scheduler-0" Dec 06 04:04:39 crc kubenswrapper[4801]: I1206 04:04:39.712326 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Dec 06 04:04:40 crc kubenswrapper[4801]: I1206 04:04:40.168044 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 06 04:04:40 crc kubenswrapper[4801]: W1206 04:04:40.171322 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod606f274f_6ae7_4b11_b684_e95831283ee4.slice/crio-d2674f151bf1c4780a990d0f25646bbf108a5e345dd3e6eca19c5e5139f94e88 WatchSource:0}: Error finding container d2674f151bf1c4780a990d0f25646bbf108a5e345dd3e6eca19c5e5139f94e88: Status 404 returned error can't find the container with id d2674f151bf1c4780a990d0f25646bbf108a5e345dd3e6eca19c5e5139f94e88 Dec 06 04:04:40 crc kubenswrapper[4801]: I1206 04:04:40.239018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69f07f2-fed0-4999-9167-1d3c6d17fccd","Type":"ContainerStarted","Data":"d6a7ac298106f2a9fcae4970903c0fe3d0e07049f40d15e2be0d7fbfcdddfd45"} Dec 06 04:04:40 crc kubenswrapper[4801]: I1206 04:04:40.239076 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69f07f2-fed0-4999-9167-1d3c6d17fccd","Type":"ContainerStarted","Data":"abdca42e2dcfffc132de8c681597436a141fac03694846768c310eca030e3e9d"} Dec 06 04:04:40 crc kubenswrapper[4801]: I1206 04:04:40.241215 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"606f274f-6ae7-4b11-b684-e95831283ee4","Type":"ContainerStarted","Data":"d2674f151bf1c4780a990d0f25646bbf108a5e345dd3e6eca19c5e5139f94e88"} Dec 06 04:04:41 crc kubenswrapper[4801]: I1206 04:04:41.227153 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="390bb08f-e246-447f-be8a-341528764d6f" path="/var/lib/kubelet/pods/390bb08f-e246-447f-be8a-341528764d6f/volumes" Dec 06 04:04:41 crc kubenswrapper[4801]: I1206 04:04:41.256879 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"606f274f-6ae7-4b11-b684-e95831283ee4","Type":"ContainerStarted","Data":"5babae6c700f0c833b5bc31a28f0a8493b59d0c22476b4e7131b6bb616982e9a"} Dec 06 04:04:41 crc kubenswrapper[4801]: I1206 04:04:41.256926 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"606f274f-6ae7-4b11-b684-e95831283ee4","Type":"ContainerStarted","Data":"61bf68dca60cd213b642b236fce6e465f57e478a6f344c8e12a38d334001699e"} Dec 06 04:04:41 crc kubenswrapper[4801]: I1206 04:04:41.261090 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69f07f2-fed0-4999-9167-1d3c6d17fccd","Type":"ContainerStarted","Data":"2d1a380a110c7570ae31499fda4c0b28d69ff4fe2d94eec2dec30fc57ea226e7"} Dec 06 04:04:41 crc kubenswrapper[4801]: I1206 04:04:41.283522 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.283495908 podStartE2EDuration="2.283495908s" podCreationTimestamp="2025-12-06 04:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:04:41.274300379 +0000 UTC m=+3534.396907961" watchObservedRunningTime="2025-12-06 04:04:41.283495908 +0000 UTC m=+3534.406103480" Dec 06 04:04:42 crc kubenswrapper[4801]: I1206 04:04:42.619197 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 06 04:04:43 crc kubenswrapper[4801]: I1206 04:04:43.295730 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69f07f2-fed0-4999-9167-1d3c6d17fccd","Type":"ContainerStarted","Data":"2ba982d493089fbea5ff6856a8d555b5bd5560d8501babb783b422f8f634d320"} Dec 06 04:04:43 crc kubenswrapper[4801]: I1206 04:04:43.296172 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ceilometer-0" Dec 06 04:04:43 crc kubenswrapper[4801]: I1206 04:04:43.324622 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.820613786 podStartE2EDuration="7.324607198s" podCreationTimestamp="2025-12-06 04:04:36 +0000 UTC" firstStartedPulling="2025-12-06 04:04:38.121350857 +0000 UTC m=+3531.243958429" lastFinishedPulling="2025-12-06 04:04:41.625344269 +0000 UTC m=+3534.747951841" observedRunningTime="2025-12-06 04:04:43.317489787 +0000 UTC m=+3536.440097359" watchObservedRunningTime="2025-12-06 04:04:43.324607198 +0000 UTC m=+3536.447214770" Dec 06 04:04:46 crc kubenswrapper[4801]: I1206 04:04:46.965772 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 06 04:04:47 crc kubenswrapper[4801]: I1206 04:04:47.033674 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:47 crc kubenswrapper[4801]: I1206 04:04:47.341241 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="manila-share" containerID="cri-o://264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" gracePeriod=30 Dec 06 04:04:47 crc kubenswrapper[4801]: I1206 04:04:47.341344 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="probe" containerID="cri-o://d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" gracePeriod=30 Dec 06 04:04:47 crc kubenswrapper[4801]: I1206 04:04:47.969971 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55868df668-jxh4g" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.240:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.240:8443: connect: connection refused" Dec 06 04:04:47 crc kubenswrapper[4801]: I1206 04:04:47.970340 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.271395 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352675 4801 generic.go:334] "Generic (PLEG): container finished" podID="b5a70a17-2398-41d3-adee-33271686d5ac" containerID="d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" exitCode=0 Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352706 4801 generic.go:334] "Generic (PLEG): container finished" podID="b5a70a17-2398-41d3-adee-33271686d5ac" containerID="264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" exitCode=1 Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352726 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerDied","Data":"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147"} Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352766 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerDied","Data":"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293"} Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"b5a70a17-2398-41d3-adee-33271686d5ac","Type":"ContainerDied","Data":"fffc47e28c353495863765ed95eb87282b8bae956d5715cd28dfd3fd418a70e2"} Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352746 4801 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.352792 4801 scope.go:117] "RemoveContainer" containerID="d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.378927 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379003 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379118 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379183 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379204 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: 
\"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379222 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379291 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379402 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrhj9\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9\") pod \"b5a70a17-2398-41d3-adee-33271686d5ac\" (UID: \"b5a70a17-2398-41d3-adee-33271686d5ac\") " Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379817 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.379864 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.380368 4801 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-var-lib-manila\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.380395 4801 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5a70a17-2398-41d3-adee-33271686d5ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.385455 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts" (OuterVolumeSpecName: "scripts") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.385708 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9" (OuterVolumeSpecName: "kube-api-access-zrhj9") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "kube-api-access-zrhj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.386093 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.386543 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph" (OuterVolumeSpecName: "ceph") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.390815 4801 scope.go:117] "RemoveContainer" containerID="264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.462767 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.464566 4801 scope.go:117] "RemoveContainer" containerID="d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" Dec 06 04:04:48 crc kubenswrapper[4801]: E1206 04:04:48.465010 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147\": container with ID starting with d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147 not found: ID does not exist" containerID="d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465042 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147"} err="failed to get container status \"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147\": rpc error: code = NotFound desc = could not find container \"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147\": container with ID starting with d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147 not found: ID does not exist" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465070 4801 scope.go:117] "RemoveContainer" containerID="264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" Dec 06 04:04:48 crc kubenswrapper[4801]: E1206 04:04:48.465277 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293\": container with ID starting with 264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293 not found: ID does not exist" containerID="264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465315 
4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293"} err="failed to get container status \"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293\": rpc error: code = NotFound desc = could not find container \"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293\": container with ID starting with 264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293 not found: ID does not exist" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465327 4801 scope.go:117] "RemoveContainer" containerID="d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465516 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147"} err="failed to get container status \"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147\": rpc error: code = NotFound desc = could not find container \"d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147\": container with ID starting with d318728f3bfabf6f26d5177f604f65854ba7f9efb4c42d6ab9543f9874f72147 not found: ID does not exist" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465532 4801 scope.go:117] "RemoveContainer" containerID="264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.465710 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293"} err="failed to get container status \"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293\": rpc error: code = NotFound desc = could not find container \"264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293\": container with ID starting with 
264d60c3db230887b02da42bf0621001a532821c596c070c4f10a2677e79e293 not found: ID does not exist" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.482283 4801 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-ceph\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.482313 4801 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.482324 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrhj9\" (UniqueName: \"kubernetes.io/projected/b5a70a17-2398-41d3-adee-33271686d5ac-kube-api-access-zrhj9\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.482333 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.482343 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.497446 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data" (OuterVolumeSpecName: "config-data") pod "b5a70a17-2398-41d3-adee-33271686d5ac" (UID: "b5a70a17-2398-41d3-adee-33271686d5ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.584792 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5a70a17-2398-41d3-adee-33271686d5ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.735644 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.749091 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.767063 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:48 crc kubenswrapper[4801]: E1206 04:04:48.767575 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="manila-share" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.767596 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="manila-share" Dec 06 04:04:48 crc kubenswrapper[4801]: E1206 04:04:48.767646 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="probe" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.767656 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="probe" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.767975 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="probe" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.768025 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" containerName="manila-share" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 
04:04:48.769433 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.771996 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.778362 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.788536 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.788596 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.788901 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-ceph\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.788973 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt78f\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-kube-api-access-vt78f\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " 
pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.789122 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.789299 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-scripts\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.789377 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.789499 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891417 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-ceph\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 
04:04:48.891732 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt78f\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-kube-api-access-vt78f\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891762 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891808 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-scripts\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891828 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891859 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891892 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.891912 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.892113 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.892172 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.897417 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-ceph\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.897426 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-scripts\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " 
pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.897881 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.899552 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-config-data\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.903248 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:48 crc kubenswrapper[4801]: I1206 04:04:48.910310 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt78f\" (UniqueName: \"kubernetes.io/projected/05cd9a4f-2b17-46aa-85c4-99ca0e3f8642-kube-api-access-vt78f\") pod \"manila-share-share1-0\" (UID: \"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642\") " pod="openstack/manila-share-share1-0" Dec 06 04:04:49 crc kubenswrapper[4801]: I1206 04:04:49.119956 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 06 04:04:49 crc kubenswrapper[4801]: I1206 04:04:49.233077 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a70a17-2398-41d3-adee-33271686d5ac" path="/var/lib/kubelet/pods/b5a70a17-2398-41d3-adee-33271686d5ac/volumes" Dec 06 04:04:49 crc kubenswrapper[4801]: I1206 04:04:49.646478 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 06 04:04:49 crc kubenswrapper[4801]: W1206 04:04:49.650858 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05cd9a4f_2b17_46aa_85c4_99ca0e3f8642.slice/crio-10162440a1edb36e1955f0e7d08f45021d6cb0ae1752ebb09751b1f483628a70 WatchSource:0}: Error finding container 10162440a1edb36e1955f0e7d08f45021d6cb0ae1752ebb09751b1f483628a70: Status 404 returned error can't find the container with id 10162440a1edb36e1955f0e7d08f45021d6cb0ae1752ebb09751b1f483628a70 Dec 06 04:04:49 crc kubenswrapper[4801]: I1206 04:04:49.725785 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 06 04:04:50 crc kubenswrapper[4801]: I1206 04:04:50.379248 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642","Type":"ContainerStarted","Data":"47c94fc87bf0e88134d3c282c07542c7c7cbb45eae04c33599a94b6d767c3c7c"} Dec 06 04:04:50 crc kubenswrapper[4801]: I1206 04:04:50.379819 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642","Type":"ContainerStarted","Data":"10162440a1edb36e1955f0e7d08f45021d6cb0ae1752ebb09751b1f483628a70"} Dec 06 04:04:51 crc kubenswrapper[4801]: I1206 04:04:51.390468 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"05cd9a4f-2b17-46aa-85c4-99ca0e3f8642","Type":"ContainerStarted","Data":"8bf713c68a6153850abe5f74fdec70412aba7cd43b982a8a5ee5f68f8621c0d7"} Dec 06 04:04:51 crc kubenswrapper[4801]: I1206 04:04:51.417278 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.4172558 podStartE2EDuration="3.4172558s" podCreationTimestamp="2025-12-06 04:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:04:51.406289515 +0000 UTC m=+3544.528897097" watchObservedRunningTime="2025-12-06 04:04:51.4172558 +0000 UTC m=+3544.539863372" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.266046 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.312378 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.312683 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.312796 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5crp\" (UniqueName: \"kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc 
kubenswrapper[4801]: I1206 04:04:54.313003 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.313147 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.313252 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.313439 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs\") pod \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\" (UID: \"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1\") " Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.314996 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs" (OuterVolumeSpecName: "logs") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.319799 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp" (OuterVolumeSpecName: "kube-api-access-c5crp") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "kube-api-access-c5crp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.319939 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.343628 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data" (OuterVolumeSpecName: "config-data") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.345560 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts" (OuterVolumeSpecName: "scripts") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.373440 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.373804 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" (UID: "f19d88d7-ec86-4b5f-8c22-b19e3750a4b1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.417428 4801 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-logs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418251 4801 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418365 4801 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418422 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5crp\" (UniqueName: \"kubernetes.io/projected/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-kube-api-access-c5crp\") on node \"crc\" DevicePath 
\"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418526 4801 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418582 4801 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.418649 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.430406 4801 generic.go:334] "Generic (PLEG): container finished" podID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerID="e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76" exitCode=137 Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.430480 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerDied","Data":"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76"} Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.430564 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55868df668-jxh4g" event={"ID":"f19d88d7-ec86-4b5f-8c22-b19e3750a4b1","Type":"ContainerDied","Data":"cbc800a393a68271e862845105bcfc33daa90e44ac31100922e99592db905de8"} Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.430584 4801 scope.go:117] "RemoveContainer" containerID="9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.430788 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55868df668-jxh4g" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.464346 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.473264 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55868df668-jxh4g"] Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.581834 4801 scope.go:117] "RemoveContainer" containerID="e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.600588 4801 scope.go:117] "RemoveContainer" containerID="9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0" Dec 06 04:04:54 crc kubenswrapper[4801]: E1206 04:04:54.601116 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0\": container with ID starting with 9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0 not found: ID does not exist" containerID="9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.601155 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0"} err="failed to get container status \"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0\": rpc error: code = NotFound desc = could not find container \"9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0\": container with ID starting with 9edbe114e4ac3d5e674a57708523728237c1f41ea1136b502e0f78fac85704e0 not found: ID does not exist" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.601183 4801 scope.go:117] "RemoveContainer" containerID="e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76" Dec 06 04:04:54 
crc kubenswrapper[4801]: E1206 04:04:54.601502 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76\": container with ID starting with e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76 not found: ID does not exist" containerID="e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76" Dec 06 04:04:54 crc kubenswrapper[4801]: I1206 04:04:54.601545 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76"} err="failed to get container status \"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76\": rpc error: code = NotFound desc = could not find container \"e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76\": container with ID starting with e1d32f4ebf22f7d9d26ba3eaa707bbcacd514e388ffeabe72269315a2b9f4d76 not found: ID does not exist" Dec 06 04:04:55 crc kubenswrapper[4801]: I1206 04:04:55.225494 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" path="/var/lib/kubelet/pods/f19d88d7-ec86-4b5f-8c22-b19e3750a4b1/volumes" Dec 06 04:04:59 crc kubenswrapper[4801]: I1206 04:04:59.120172 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 06 04:05:01 crc kubenswrapper[4801]: I1206 04:05:01.266808 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 06 04:05:07 crc kubenswrapper[4801]: I1206 04:05:07.062174 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 04:05:10 crc kubenswrapper[4801]: I1206 04:05:10.716276 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 06 04:06:11 
crc kubenswrapper[4801]: I1206 04:06:11.579771 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 04:06:11 crc kubenswrapper[4801]: E1206 04:06:11.580821 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.580838 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" Dec 06 04:06:11 crc kubenswrapper[4801]: E1206 04:06:11.580863 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon-log" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.580871 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon-log" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.581138 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon-log" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.581163 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19d88d7-ec86-4b5f-8c22-b19e3750a4b1" containerName="horizon" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.581981 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.583815 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.584074 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hkth4" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.585584 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.585923 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.605439 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664266 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664463 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664593 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664632 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664689 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664778 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664815 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkvrs\" (UniqueName: \"kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664857 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.664964 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766337 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766428 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkvrs\" (UniqueName: \"kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766455 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766477 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key\") pod \"tempest-tests-tempest\" (UID: 
\"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766564 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766606 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766642 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766661 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.766687 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " 
pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.767425 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.767907 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.769332 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.769602 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.770323 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc 
kubenswrapper[4801]: I1206 04:06:11.779631 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.782716 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.783412 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkvrs\" (UniqueName: \"kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.783963 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.798361 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"tempest-tests-tempest\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " pod="openstack/tempest-tests-tempest" Dec 06 04:06:11 crc kubenswrapper[4801]: I1206 04:06:11.912510 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 04:06:12 crc kubenswrapper[4801]: I1206 04:06:12.354361 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 04:06:13 crc kubenswrapper[4801]: I1206 04:06:13.375337 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f38b08ba-582a-45d7-a085-ccfa93f1a805","Type":"ContainerStarted","Data":"94307e56fb555f20cab31ece35652b57b2f5fe250db0a5e2f2c00168f0380bf4"} Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.571002 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.617007 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.617110 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.752271 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rj6f\" (UniqueName: \"kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.752369 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.752403 4801 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.855802 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rj6f\" (UniqueName: \"kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.855940 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.855989 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.857022 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.857642 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.891863 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rj6f\" (UniqueName: \"kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f\") pod \"community-operators-w6t57\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:36 crc kubenswrapper[4801]: I1206 04:06:36.949347 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.174738 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.202956 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.203101 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.304230 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.304376 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.304417 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrg7m\" (UniqueName: \"kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.406670 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.406777 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrg7m\" (UniqueName: \"kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m\") pod 
\"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.407406 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.407578 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.407992 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.430295 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrg7m\" (UniqueName: \"kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m\") pod \"redhat-marketplace-q624v\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.525938 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.771242 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.773399 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.799237 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.917996 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgj7\" (UniqueName: \"kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.918047 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:39 crc kubenswrapper[4801]: I1206 04:06:39.918208 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.020502 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vdgj7\" (UniqueName: \"kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.020553 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.020603 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.021131 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.021486 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.041010 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgj7\" (UniqueName: 
\"kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7\") pod \"redhat-operators-jg5wk\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:40 crc kubenswrapper[4801]: I1206 04:06:40.116010 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:06:41 crc kubenswrapper[4801]: I1206 04:06:41.170281 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:06:41 crc kubenswrapper[4801]: I1206 04:06:41.171054 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:06:51 crc kubenswrapper[4801]: E1206 04:06:51.101988 4801 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 06 04:06:51 crc kubenswrapper[4801]: E1206 04:06:51.103218 4801 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkvrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(f38b08ba-582a-45d7-a085-ccfa93f1a805): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 04:06:51 crc kubenswrapper[4801]: E1206 04:06:51.105058 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="f38b08ba-582a-45d7-a085-ccfa93f1a805" Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.605948 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:06:51 crc kubenswrapper[4801]: W1206 04:06:51.685927 4801 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c762bf5_78f4_4067_8f8e_3a4d9f04790b.slice/crio-788ddf91fd48d42c09dda54cb5bd7674b2adfef0c18100bd51ad1b44af0aafe6 WatchSource:0}: Error finding container 788ddf91fd48d42c09dda54cb5bd7674b2adfef0c18100bd51ad1b44af0aafe6: Status 404 returned error can't find the container with id 788ddf91fd48d42c09dda54cb5bd7674b2adfef0c18100bd51ad1b44af0aafe6 Dec 06 04:06:51 crc kubenswrapper[4801]: W1206 04:06:51.689357 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdeab9da_38e2_4c08_a4b8_36beee24b2d8.slice/crio-44d6ce60368e40b6994a05956d523c96afb839c78ba6245cae7bca16e734c4bc WatchSource:0}: Error finding container 44d6ce60368e40b6994a05956d523c96afb839c78ba6245cae7bca16e734c4bc: Status 404 returned error can't find the container with id 44d6ce60368e40b6994a05956d523c96afb839c78ba6245cae7bca16e734c4bc Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.692811 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.702127 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.854926 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerStarted","Data":"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30"} Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.855175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerStarted","Data":"44d6ce60368e40b6994a05956d523c96afb839c78ba6245cae7bca16e734c4bc"} Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 
04:06:51.861682 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerStarted","Data":"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0"} Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.861732 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerStarted","Data":"788ddf91fd48d42c09dda54cb5bd7674b2adfef0c18100bd51ad1b44af0aafe6"} Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.864204 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerStarted","Data":"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240"} Dec 06 04:06:51 crc kubenswrapper[4801]: I1206 04:06:51.864246 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerStarted","Data":"e180bb484b1d0880084fff01bde318690c43e314daa95fcc770ce502352b3173"} Dec 06 04:06:51 crc kubenswrapper[4801]: E1206 04:06:51.865551 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="f38b08ba-582a-45d7-a085-ccfa93f1a805" Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.884416 4801 generic.go:334] "Generic (PLEG): container finished" podID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerID="9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30" exitCode=0 Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.884696 4801 generic.go:334] 
"Generic (PLEG): container finished" podID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerID="e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7" exitCode=0 Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.884497 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerDied","Data":"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30"} Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.884769 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerDied","Data":"e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7"} Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.890281 4801 generic.go:334] "Generic (PLEG): container finished" podID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerID="92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0" exitCode=0 Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.890351 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerDied","Data":"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0"} Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.891995 4801 generic.go:334] "Generic (PLEG): container finished" podID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerID="b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240" exitCode=0 Dec 06 04:06:52 crc kubenswrapper[4801]: I1206 04:06:52.892019 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerDied","Data":"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240"} Dec 06 04:06:53 crc 
kubenswrapper[4801]: I1206 04:06:53.904847 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerStarted","Data":"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979"} Dec 06 04:06:53 crc kubenswrapper[4801]: I1206 04:06:53.909279 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerStarted","Data":"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26"} Dec 06 04:06:53 crc kubenswrapper[4801]: I1206 04:06:53.917060 4801 generic.go:334] "Generic (PLEG): container finished" podID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerID="0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418" exitCode=0 Dec 06 04:06:53 crc kubenswrapper[4801]: I1206 04:06:53.917109 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerDied","Data":"0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418"} Dec 06 04:06:53 crc kubenswrapper[4801]: I1206 04:06:53.934887 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q624v" podStartSLOduration=13.487901664 podStartE2EDuration="14.934861713s" podCreationTimestamp="2025-12-06 04:06:39 +0000 UTC" firstStartedPulling="2025-12-06 04:06:51.856767821 +0000 UTC m=+3664.979375393" lastFinishedPulling="2025-12-06 04:06:53.30372787 +0000 UTC m=+3666.426335442" observedRunningTime="2025-12-06 04:06:53.927214767 +0000 UTC m=+3667.049822339" watchObservedRunningTime="2025-12-06 04:06:53.934861713 +0000 UTC m=+3667.057469285" Dec 06 04:06:57 crc kubenswrapper[4801]: I1206 04:06:57.974223 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerID="9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26" exitCode=0 Dec 06 04:06:57 crc kubenswrapper[4801]: I1206 04:06:57.974305 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerDied","Data":"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26"} Dec 06 04:06:59 crc kubenswrapper[4801]: I1206 04:06:59.526955 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:59 crc kubenswrapper[4801]: I1206 04:06:59.527242 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:06:59 crc kubenswrapper[4801]: I1206 04:06:59.593140 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:07:00 crc kubenswrapper[4801]: I1206 04:07:00.061896 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:07:00 crc kubenswrapper[4801]: I1206 04:07:00.820865 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:07:01 crc kubenswrapper[4801]: I1206 04:07:01.010655 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerStarted","Data":"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc"} Dec 06 04:07:02 crc kubenswrapper[4801]: I1206 04:07:02.018934 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q624v" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="registry-server" 
containerID="cri-o://37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979" gracePeriod=2 Dec 06 04:07:02 crc kubenswrapper[4801]: I1206 04:07:02.041295 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w6t57" podStartSLOduration=17.405964357 podStartE2EDuration="26.041248996s" podCreationTimestamp="2025-12-06 04:06:36 +0000 UTC" firstStartedPulling="2025-12-06 04:06:51.867534719 +0000 UTC m=+3664.990142291" lastFinishedPulling="2025-12-06 04:07:00.502819348 +0000 UTC m=+3673.625426930" observedRunningTime="2025-12-06 04:07:02.037420792 +0000 UTC m=+3675.160028364" watchObservedRunningTime="2025-12-06 04:07:02.041248996 +0000 UTC m=+3675.163856568" Dec 06 04:07:02 crc kubenswrapper[4801]: I1206 04:07:02.950779 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.030955 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerStarted","Data":"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9"} Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.036524 4801 generic.go:334] "Generic (PLEG): container finished" podID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerID="37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979" exitCode=0 Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.036565 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerDied","Data":"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979"} Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.036814 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-q624v" event={"ID":"fdeab9da-38e2-4c08-a4b8-36beee24b2d8","Type":"ContainerDied","Data":"44d6ce60368e40b6994a05956d523c96afb839c78ba6245cae7bca16e734c4bc"} Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.036603 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q624v" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.036874 4801 scope.go:117] "RemoveContainer" containerID="37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.063675 4801 scope.go:117] "RemoveContainer" containerID="e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.067348 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jg5wk" podStartSLOduration=14.335668188 podStartE2EDuration="24.067334082s" podCreationTimestamp="2025-12-06 04:06:39 +0000 UTC" firstStartedPulling="2025-12-06 04:06:52.893386801 +0000 UTC m=+3666.015994373" lastFinishedPulling="2025-12-06 04:07:02.625052695 +0000 UTC m=+3675.747660267" observedRunningTime="2025-12-06 04:07:03.050259574 +0000 UTC m=+3676.172867146" watchObservedRunningTime="2025-12-06 04:07:03.067334082 +0000 UTC m=+3676.189941674" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.089951 4801 scope.go:117] "RemoveContainer" containerID="9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.102470 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrg7m\" (UniqueName: \"kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m\") pod \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 
04:07:03.102669 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities\") pod \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.102741 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content\") pod \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\" (UID: \"fdeab9da-38e2-4c08-a4b8-36beee24b2d8\") " Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.104476 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities" (OuterVolumeSpecName: "utilities") pod "fdeab9da-38e2-4c08-a4b8-36beee24b2d8" (UID: "fdeab9da-38e2-4c08-a4b8-36beee24b2d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.108411 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m" (OuterVolumeSpecName: "kube-api-access-xrg7m") pod "fdeab9da-38e2-4c08-a4b8-36beee24b2d8" (UID: "fdeab9da-38e2-4c08-a4b8-36beee24b2d8"). InnerVolumeSpecName "kube-api-access-xrg7m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.117855 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrg7m\" (UniqueName: \"kubernetes.io/projected/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-kube-api-access-xrg7m\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.117888 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.130086 4801 scope.go:117] "RemoveContainer" containerID="37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979" Dec 06 04:07:03 crc kubenswrapper[4801]: E1206 04:07:03.131856 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979\": container with ID starting with 37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979 not found: ID does not exist" containerID="37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.131888 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979"} err="failed to get container status \"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979\": rpc error: code = NotFound desc = could not find container \"37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979\": container with ID starting with 37305c992ba64baf1c8d32337c29feba9bb066fc39ad068deaa7c3ec0580a979 not found: ID does not exist" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.131912 4801 scope.go:117] "RemoveContainer" 
containerID="e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7" Dec 06 04:07:03 crc kubenswrapper[4801]: E1206 04:07:03.132433 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7\": container with ID starting with e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7 not found: ID does not exist" containerID="e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.132490 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7"} err="failed to get container status \"e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7\": rpc error: code = NotFound desc = could not find container \"e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7\": container with ID starting with e35b29a95af5a3eb720248e276203e76baa874cc7def8c8bd6d1c6309e2779c7 not found: ID does not exist" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.132520 4801 scope.go:117] "RemoveContainer" containerID="9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30" Dec 06 04:07:03 crc kubenswrapper[4801]: E1206 04:07:03.132921 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30\": container with ID starting with 9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30 not found: ID does not exist" containerID="9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.132941 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30"} err="failed to get container status \"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30\": rpc error: code = NotFound desc = could not find container \"9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30\": container with ID starting with 9f568cd8ea2beee658d5b4b2ba7f7f1707738a55b0b7c81620708cdaec3f1a30 not found: ID does not exist" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.133955 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdeab9da-38e2-4c08-a4b8-36beee24b2d8" (UID: "fdeab9da-38e2-4c08-a4b8-36beee24b2d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.229252 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdeab9da-38e2-4c08-a4b8-36beee24b2d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.363943 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.372276 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q624v"] Dec 06 04:07:03 crc kubenswrapper[4801]: I1206 04:07:03.645519 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 04:07:05 crc kubenswrapper[4801]: I1206 04:07:05.064161 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"f38b08ba-582a-45d7-a085-ccfa93f1a805","Type":"ContainerStarted","Data":"7366fae0b1f91a946a5f119748c725b243c646f029427f595c14ac1d0948a213"} Dec 06 04:07:05 crc kubenswrapper[4801]: I1206 04:07:05.085517 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.8054148249999997 podStartE2EDuration="55.085498724s" podCreationTimestamp="2025-12-06 04:06:10 +0000 UTC" firstStartedPulling="2025-12-06 04:06:12.363230864 +0000 UTC m=+3625.485838436" lastFinishedPulling="2025-12-06 04:07:03.643314763 +0000 UTC m=+3676.765922335" observedRunningTime="2025-12-06 04:07:05.082259387 +0000 UTC m=+3678.204866959" watchObservedRunningTime="2025-12-06 04:07:05.085498724 +0000 UTC m=+3678.208106296" Dec 06 04:07:05 crc kubenswrapper[4801]: I1206 04:07:05.222889 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" path="/var/lib/kubelet/pods/fdeab9da-38e2-4c08-a4b8-36beee24b2d8/volumes" Dec 06 04:07:06 crc kubenswrapper[4801]: I1206 04:07:06.950389 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:06 crc kubenswrapper[4801]: I1206 04:07:06.951114 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:06 crc kubenswrapper[4801]: I1206 04:07:06.998420 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:07 crc kubenswrapper[4801]: I1206 04:07:07.129275 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:10 crc kubenswrapper[4801]: I1206 04:07:10.116231 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:10 crc 
kubenswrapper[4801]: I1206 04:07:10.116777 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:10 crc kubenswrapper[4801]: I1206 04:07:10.182155 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.013214 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.013449 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w6t57" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="registry-server" containerID="cri-o://c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc" gracePeriod=2 Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.169603 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.169853 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.180177 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.485686 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.613548 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rj6f\" (UniqueName: \"kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f\") pod \"02c1c2a8-8ef7-4937-9754-619641b82ba9\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.613880 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities\") pod \"02c1c2a8-8ef7-4937-9754-619641b82ba9\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.613978 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content\") pod \"02c1c2a8-8ef7-4937-9754-619641b82ba9\" (UID: \"02c1c2a8-8ef7-4937-9754-619641b82ba9\") " Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.615026 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities" (OuterVolumeSpecName: "utilities") pod "02c1c2a8-8ef7-4937-9754-619641b82ba9" (UID: "02c1c2a8-8ef7-4937-9754-619641b82ba9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.619882 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f" (OuterVolumeSpecName: "kube-api-access-2rj6f") pod "02c1c2a8-8ef7-4937-9754-619641b82ba9" (UID: "02c1c2a8-8ef7-4937-9754-619641b82ba9"). InnerVolumeSpecName "kube-api-access-2rj6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.665717 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c1c2a8-8ef7-4937-9754-619641b82ba9" (UID: "02c1c2a8-8ef7-4937-9754-619641b82ba9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.716271 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.716308 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1c2a8-8ef7-4937-9754-619641b82ba9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:11 crc kubenswrapper[4801]: I1206 04:07:11.716320 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rj6f\" (UniqueName: \"kubernetes.io/projected/02c1c2a8-8ef7-4937-9754-619641b82ba9-kube-api-access-2rj6f\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.139872 4801 generic.go:334] "Generic (PLEG): container finished" podID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerID="c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc" exitCode=0 Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.139946 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerDied","Data":"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc"} Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.139994 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-w6t57" event={"ID":"02c1c2a8-8ef7-4937-9754-619641b82ba9","Type":"ContainerDied","Data":"e180bb484b1d0880084fff01bde318690c43e314daa95fcc770ce502352b3173"} Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.140010 4801 scope.go:117] "RemoveContainer" containerID="c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.140352 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6t57" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.161346 4801 scope.go:117] "RemoveContainer" containerID="0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.176231 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.184657 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w6t57"] Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.196775 4801 scope.go:117] "RemoveContainer" containerID="b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.233558 4801 scope.go:117] "RemoveContainer" containerID="c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc" Dec 06 04:07:12 crc kubenswrapper[4801]: E1206 04:07:12.234000 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc\": container with ID starting with c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc not found: ID does not exist" containerID="c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 
04:07:12.234048 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc"} err="failed to get container status \"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc\": rpc error: code = NotFound desc = could not find container \"c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc\": container with ID starting with c99d0f2ee24818a7c726c8717fd3ba8acb19003f60cdd9cd4bd63ff77c7509dc not found: ID does not exist" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.234074 4801 scope.go:117] "RemoveContainer" containerID="0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418" Dec 06 04:07:12 crc kubenswrapper[4801]: E1206 04:07:12.234421 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418\": container with ID starting with 0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418 not found: ID does not exist" containerID="0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.234479 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418"} err="failed to get container status \"0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418\": rpc error: code = NotFound desc = could not find container \"0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418\": container with ID starting with 0fb2f95489eb1e08f9970397af34f3e6072e04161f39776c131ec32348d2a418 not found: ID does not exist" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.234512 4801 scope.go:117] "RemoveContainer" containerID="b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240" Dec 06 04:07:12 crc 
kubenswrapper[4801]: E1206 04:07:12.235001 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240\": container with ID starting with b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240 not found: ID does not exist" containerID="b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240" Dec 06 04:07:12 crc kubenswrapper[4801]: I1206 04:07:12.235035 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240"} err="failed to get container status \"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240\": rpc error: code = NotFound desc = could not find container \"b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240\": container with ID starting with b0e3e33d78256dc3c07d17c0f77a1ae16f3c7ed42d56ac14efbc5087f583f240 not found: ID does not exist" Dec 06 04:07:13 crc kubenswrapper[4801]: I1206 04:07:13.225562 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" path="/var/lib/kubelet/pods/02c1c2a8-8ef7-4937-9754-619641b82ba9/volumes" Dec 06 04:07:13 crc kubenswrapper[4801]: I1206 04:07:13.618829 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:07:13 crc kubenswrapper[4801]: I1206 04:07:13.619099 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jg5wk" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="registry-server" containerID="cri-o://60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9" gracePeriod=2 Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.066462 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.166039 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content\") pod \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.166137 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities\") pod \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.166269 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdgj7\" (UniqueName: \"kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7\") pod \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\" (UID: \"8c762bf5-78f4-4067-8f8e-3a4d9f04790b\") " Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.167535 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities" (OuterVolumeSpecName: "utilities") pod "8c762bf5-78f4-4067-8f8e-3a4d9f04790b" (UID: "8c762bf5-78f4-4067-8f8e-3a4d9f04790b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.167794 4801 generic.go:334] "Generic (PLEG): container finished" podID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerID="60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9" exitCode=0 Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.167889 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerDied","Data":"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9"} Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.167966 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jg5wk" event={"ID":"8c762bf5-78f4-4067-8f8e-3a4d9f04790b","Type":"ContainerDied","Data":"788ddf91fd48d42c09dda54cb5bd7674b2adfef0c18100bd51ad1b44af0aafe6"} Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.168289 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jg5wk" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.168920 4801 scope.go:117] "RemoveContainer" containerID="60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.177789 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7" (OuterVolumeSpecName: "kube-api-access-vdgj7") pod "8c762bf5-78f4-4067-8f8e-3a4d9f04790b" (UID: "8c762bf5-78f4-4067-8f8e-3a4d9f04790b"). InnerVolumeSpecName "kube-api-access-vdgj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.203584 4801 scope.go:117] "RemoveContainer" containerID="9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.239873 4801 scope.go:117] "RemoveContainer" containerID="92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.269204 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdgj7\" (UniqueName: \"kubernetes.io/projected/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-kube-api-access-vdgj7\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.269393 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.278469 4801 scope.go:117] "RemoveContainer" containerID="60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9" Dec 06 04:07:14 crc kubenswrapper[4801]: E1206 04:07:14.278939 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9\": container with ID starting with 60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9 not found: ID does not exist" containerID="60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.279038 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9"} err="failed to get container status \"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9\": rpc error: code = NotFound desc = could not find container 
\"60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9\": container with ID starting with 60ee070f28402e08ef11e63514615870e830c8b7b3f63c4b37950ec107cbcec9 not found: ID does not exist" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.279125 4801 scope.go:117] "RemoveContainer" containerID="9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26" Dec 06 04:07:14 crc kubenswrapper[4801]: E1206 04:07:14.279578 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26\": container with ID starting with 9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26 not found: ID does not exist" containerID="9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.279615 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26"} err="failed to get container status \"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26\": rpc error: code = NotFound desc = could not find container \"9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26\": container with ID starting with 9bde75c802d8faaf1fe05df4c4af7d1e6afc3d21aef3698037ed5b73295f4b26 not found: ID does not exist" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.279647 4801 scope.go:117] "RemoveContainer" containerID="92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0" Dec 06 04:07:14 crc kubenswrapper[4801]: E1206 04:07:14.280146 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0\": container with ID starting with 92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0 not found: ID does not exist" 
containerID="92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.280195 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0"} err="failed to get container status \"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0\": rpc error: code = NotFound desc = could not find container \"92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0\": container with ID starting with 92001f0e4731ed9282ca4768c5451f80d2f083bf69fb6c80d7e01f5c922daaa0 not found: ID does not exist" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.282224 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c762bf5-78f4-4067-8f8e-3a4d9f04790b" (UID: "8c762bf5-78f4-4067-8f8e-3a4d9f04790b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.372540 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c762bf5-78f4-4067-8f8e-3a4d9f04790b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.504435 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:07:14 crc kubenswrapper[4801]: I1206 04:07:14.513350 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jg5wk"] Dec 06 04:07:15 crc kubenswrapper[4801]: I1206 04:07:15.224864 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" path="/var/lib/kubelet/pods/8c762bf5-78f4-4067-8f8e-3a4d9f04790b/volumes" Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.169961 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.170667 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.170721 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.171524 4801 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.171570 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" gracePeriod=600 Dec 06 04:07:41 crc kubenswrapper[4801]: E1206 04:07:41.296700 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.478841 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" exitCode=0 Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.479238 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"} Dec 06 04:07:41 crc kubenswrapper[4801]: I1206 04:07:41.479277 4801 scope.go:117] "RemoveContainer" containerID="ce7d616b613d4c5b6c42a892b482868162d2c2cd72210b0f14d487fb878d9cbe" Dec 06 04:07:41 crc 
kubenswrapper[4801]: I1206 04:07:41.480080 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:07:41 crc kubenswrapper[4801]: E1206 04:07:41.480369 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:07:56 crc kubenswrapper[4801]: I1206 04:07:56.213919 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:07:56 crc kubenswrapper[4801]: E1206 04:07:56.216026 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:08:10 crc kubenswrapper[4801]: I1206 04:08:10.212027 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:08:10 crc kubenswrapper[4801]: E1206 04:08:10.213363 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 
06 04:08:23 crc kubenswrapper[4801]: I1206 04:08:23.213013 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:08:23 crc kubenswrapper[4801]: E1206 04:08:23.214213 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:08:37 crc kubenswrapper[4801]: I1206 04:08:37.220052 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:08:37 crc kubenswrapper[4801]: E1206 04:08:37.220931 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:08:52 crc kubenswrapper[4801]: I1206 04:08:52.213175 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:08:52 crc kubenswrapper[4801]: E1206 04:08:52.214337 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:09:07 crc kubenswrapper[4801]: I1206 04:09:07.221393 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:09:07 crc kubenswrapper[4801]: E1206 04:09:07.222367 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:09:20 crc kubenswrapper[4801]: I1206 04:09:20.212306 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:09:20 crc kubenswrapper[4801]: E1206 04:09:20.213116 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:09:32 crc kubenswrapper[4801]: I1206 04:09:32.212467 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:09:32 crc kubenswrapper[4801]: E1206 04:09:32.213213 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:09:42 crc kubenswrapper[4801]: I1206 04:09:42.203000 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-554d4f888f-vn47n" podUID="87b90546-3593-40c2-9be7-84187756b4cf" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 06 04:09:46 crc kubenswrapper[4801]: I1206 04:09:46.213410 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:09:46 crc kubenswrapper[4801]: E1206 04:09:46.214676 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:09:57 crc kubenswrapper[4801]: I1206 04:09:57.217745 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:09:57 crc kubenswrapper[4801]: E1206 04:09:57.218633 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:10:12 crc kubenswrapper[4801]: I1206 04:10:12.212733 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:10:12 crc kubenswrapper[4801]: E1206 
04:10:12.214808 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:10:26 crc kubenswrapper[4801]: I1206 04:10:26.212190 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:10:26 crc kubenswrapper[4801]: E1206 04:10:26.213049 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:10:38 crc kubenswrapper[4801]: I1206 04:10:38.212461 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:10:38 crc kubenswrapper[4801]: E1206 04:10:38.214495 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:10:51 crc kubenswrapper[4801]: I1206 04:10:51.213094 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:10:51 crc 
kubenswrapper[4801]: E1206 04:10:51.214158 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:11:02 crc kubenswrapper[4801]: I1206 04:11:02.214109 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:11:02 crc kubenswrapper[4801]: E1206 04:11:02.215807 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:11:15 crc kubenswrapper[4801]: I1206 04:11:15.212454 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 06 04:11:15 crc kubenswrapper[4801]: E1206 04:11:15.213245 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:11:28 crc kubenswrapper[4801]: I1206 04:11:28.212678 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff" Dec 
06 04:11:28 crc kubenswrapper[4801]: E1206 04:11:28.213530 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:11:43 crc kubenswrapper[4801]: I1206 04:11:43.212661 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:11:43 crc kubenswrapper[4801]: E1206 04:11:43.213373 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:11:58 crc kubenswrapper[4801]: I1206 04:11:58.212237 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:11:58 crc kubenswrapper[4801]: E1206 04:11:58.214153 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:12:13 crc kubenswrapper[4801]: I1206 04:12:13.212597 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:12:13 crc kubenswrapper[4801]: E1206 04:12:13.214572 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:12:25 crc kubenswrapper[4801]: I1206 04:12:25.212186 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:12:25 crc kubenswrapper[4801]: E1206 04:12:25.212964 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:12:40 crc kubenswrapper[4801]: I1206 04:12:40.212619 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:12:40 crc kubenswrapper[4801]: E1206 04:12:40.213692 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900"
Dec 06 04:12:54 crc kubenswrapper[4801]: I1206 04:12:54.212794 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:12:55 crc kubenswrapper[4801]: I1206 04:12:55.436065 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4"}
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.045328 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-9f8gs"]
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.054617 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-0b98-account-create-update-qs4ph"]
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.064022 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-9f8gs"]
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.072386 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-0b98-account-create-update-qs4ph"]
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.223221 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ba4993-d54d-4bc6-9250-b0a134e34d6d" path="/var/lib/kubelet/pods/48ba4993-d54d-4bc6-9250-b0a134e34d6d/volumes"
Dec 06 04:13:39 crc kubenswrapper[4801]: I1206 04:13:39.224106 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e" path="/var/lib/kubelet/pods/5ad074fd-5a0d-4d62-8e5f-ec221ae6eb2e/volumes"
Dec 06 04:14:14 crc kubenswrapper[4801]: I1206 04:14:14.052866 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-mk49l"]
Dec 06 04:14:14 crc kubenswrapper[4801]: I1206 04:14:14.065196 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-mk49l"]
Dec 06 04:14:15 crc kubenswrapper[4801]: I1206 04:14:15.223992 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34ed2fd1-0b46-478f-b8f6-013c6744778d" path="/var/lib/kubelet/pods/34ed2fd1-0b46-478f-b8f6-013c6744778d/volumes"
Dec 06 04:14:22 crc kubenswrapper[4801]: I1206 04:14:22.230925 4801 scope.go:117] "RemoveContainer" containerID="6f8ebcfddcc43acd0ec256d80d7a437b11b70f76a371d9624ee8fb83eeefc6ca"
Dec 06 04:14:22 crc kubenswrapper[4801]: I1206 04:14:22.278628 4801 scope.go:117] "RemoveContainer" containerID="d9d0dd3a375e338467248f6362e763fbb34b0adf484fcbfce079e72b4a53c3cb"
Dec 06 04:14:22 crc kubenswrapper[4801]: I1206 04:14:22.319296 4801 scope.go:117] "RemoveContainer" containerID="c3e5140176a8bfed4bc86d7c2658aaae0323ef978c4f3fe2021b28dd8d61bd5c"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.162057 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"]
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163207 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163228 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163258 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163266 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163283 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163291 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163305 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163311 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163321 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163326 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163342 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163349 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="extract-utilities"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163366 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163374 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163381 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163387 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="extract-content"
Dec 06 04:15:00 crc kubenswrapper[4801]: E1206 04:15:00.163396 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163404 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163571 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c762bf5-78f4-4067-8f8e-3a4d9f04790b" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163601 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdeab9da-38e2-4c08-a4b8-36beee24b2d8" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.163609 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c1c2a8-8ef7-4937-9754-619641b82ba9" containerName="registry-server"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.164369 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.167178 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.168340 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.183266 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"]
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.280914 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8ccx\" (UniqueName: \"kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.281178 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.281216 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.383592 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8ccx\" (UniqueName: \"kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.383853 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.383882 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.385036 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.390695 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.402676 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8ccx\" (UniqueName: \"kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx\") pod \"collect-profiles-29416575-kqvn2\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:00 crc kubenswrapper[4801]: I1206 04:15:00.498117 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:01 crc kubenswrapper[4801]: I1206 04:15:00.999986 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"]
Dec 06 04:15:01 crc kubenswrapper[4801]: I1206 04:15:01.679199 4801 generic.go:334] "Generic (PLEG): container finished" podID="0fd672b2-0f0f-49e7-b1b1-3528916e8b84" containerID="55db853e02134b985083ac89f53cadc8177e83b4c3941931d92f178091020743" exitCode=0
Dec 06 04:15:01 crc kubenswrapper[4801]: I1206 04:15:01.679280 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2" event={"ID":"0fd672b2-0f0f-49e7-b1b1-3528916e8b84","Type":"ContainerDied","Data":"55db853e02134b985083ac89f53cadc8177e83b4c3941931d92f178091020743"}
Dec 06 04:15:01 crc kubenswrapper[4801]: I1206 04:15:01.679830 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2" event={"ID":"0fd672b2-0f0f-49e7-b1b1-3528916e8b84","Type":"ContainerStarted","Data":"1dd1e38c2180b7e018a88e78b991b8fe130aef02bac953b05dcde9f89dc9ddfd"}
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.127534 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.240882 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8ccx\" (UniqueName: \"kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx\") pod \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") "
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.241657 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume\") pod \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") "
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.241694 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume\") pod \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\" (UID: \"0fd672b2-0f0f-49e7-b1b1-3528916e8b84\") "
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.242354 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fd672b2-0f0f-49e7-b1b1-3528916e8b84" (UID: "0fd672b2-0f0f-49e7-b1b1-3528916e8b84"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.248956 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fd672b2-0f0f-49e7-b1b1-3528916e8b84" (UID: "0fd672b2-0f0f-49e7-b1b1-3528916e8b84"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.256231 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx" (OuterVolumeSpecName: "kube-api-access-x8ccx") pod "0fd672b2-0f0f-49e7-b1b1-3528916e8b84" (UID: "0fd672b2-0f0f-49e7-b1b1-3528916e8b84"). InnerVolumeSpecName "kube-api-access-x8ccx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.344441 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8ccx\" (UniqueName: \"kubernetes.io/projected/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-kube-api-access-x8ccx\") on node \"crc\" DevicePath \"\""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.344678 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.344737 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fd672b2-0f0f-49e7-b1b1-3528916e8b84-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.697681 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2" event={"ID":"0fd672b2-0f0f-49e7-b1b1-3528916e8b84","Type":"ContainerDied","Data":"1dd1e38c2180b7e018a88e78b991b8fe130aef02bac953b05dcde9f89dc9ddfd"}
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.698119 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd1e38c2180b7e018a88e78b991b8fe130aef02bac953b05dcde9f89dc9ddfd"
Dec 06 04:15:03 crc kubenswrapper[4801]: I1206 04:15:03.697887 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416575-kqvn2"
Dec 06 04:15:04 crc kubenswrapper[4801]: I1206 04:15:04.219541 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn"]
Dec 06 04:15:04 crc kubenswrapper[4801]: I1206 04:15:04.228223 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416530-tpmtn"]
Dec 06 04:15:05 crc kubenswrapper[4801]: I1206 04:15:05.240607 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae" path="/var/lib/kubelet/pods/6e5f3e1a-d7a7-4b15-b36b-be2aafcb43ae/volumes"
Dec 06 04:15:11 crc kubenswrapper[4801]: I1206 04:15:11.169493 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 04:15:11 crc kubenswrapper[4801]: I1206 04:15:11.170101 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 04:15:22 crc kubenswrapper[4801]: I1206 04:15:22.429737 4801 scope.go:117] "RemoveContainer" containerID="cbeea869cfbb8ef3307d96b25a5bd1a5e45865480bb47fbb13ebfa5fdfcbb5ff"
Dec 06 04:15:41 crc kubenswrapper[4801]: I1206 04:15:41.169466 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 04:15:41 crc kubenswrapper[4801]: I1206 04:15:41.169988 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.169787 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.170312 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.170358 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt"
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.171136 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.171203 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4" gracePeriod=600
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.380555 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4" exitCode=0
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.380826 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4"}
Dec 06 04:16:11 crc kubenswrapper[4801]: I1206 04:16:11.380861 4801 scope.go:117] "RemoveContainer" containerID="e57b748ca07cb6d48775adce94230191da443cdb41260d8cc7cd5098f3da31ff"
Dec 06 04:16:12 crc kubenswrapper[4801]: I1206 04:16:12.391502 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c"}
Dec 06 04:17:20 crc kubenswrapper[4801]: I1206 04:17:20.997583 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"]
Dec 06 04:17:21 crc kubenswrapper[4801]: E1206 04:17:20.999023 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fd672b2-0f0f-49e7-b1b1-3528916e8b84" containerName="collect-profiles"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:20.999051 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd672b2-0f0f-49e7-b1b1-3528916e8b84" containerName="collect-profiles"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.000720 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd672b2-0f0f-49e7-b1b1-3528916e8b84" containerName="collect-profiles"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.004882 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.008872 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"]
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.093072 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.093171 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.093248 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxfcz\" (UniqueName: \"kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.194838 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxfcz\" (UniqueName: \"kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.194971 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.195022 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.195803 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.195817 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.219415 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxfcz\" (UniqueName: \"kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz\") pod \"redhat-operators-4s9lv\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.331597 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s9lv"
Dec 06 04:17:21 crc kubenswrapper[4801]: I1206 04:17:21.854283 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"]
Dec 06 04:17:22 crc kubenswrapper[4801]: I1206 04:17:22.087964 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerStarted","Data":"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69"}
Dec 06 04:17:22 crc kubenswrapper[4801]: I1206 04:17:22.088378 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerStarted","Data":"d17582c442f49f28f679ae85b2f283c64f6aa91a18c2f09d206d0e5a4cb60284"}
Dec 06 04:17:22 crc kubenswrapper[4801]: I1206 04:17:22.091088 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 04:17:23 crc kubenswrapper[4801]: I1206 04:17:23.109596 4801 generic.go:334] "Generic (PLEG): container finished" podID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerID="0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69" exitCode=0
Dec 06 04:17:23 crc kubenswrapper[4801]: I1206 04:17:23.109865 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerDied","Data":"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69"}
Dec 06 04:17:24 crc kubenswrapper[4801]: I1206 04:17:24.130536 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerStarted","Data":"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7"}
Dec 06 04:17:25 crc kubenswrapper[4801]: I1206 04:17:25.140932 4801 generic.go:334] "Generic (PLEG): container finished" podID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerID="3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7" exitCode=0
Dec 06 04:17:25 crc kubenswrapper[4801]: I1206 04:17:25.141010 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerDied","Data":"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7"}
Dec 06 04:17:26 crc kubenswrapper[4801]: I1206 04:17:26.152833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerStarted","Data":"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8"}
Dec 06 04:17:26 crc kubenswrapper[4801]: I1206 04:17:26.185267 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4s9lv" podStartSLOduration=2.720894871 podStartE2EDuration="6.185246505s" podCreationTimestamp="2025-12-06 04:17:20 +0000 UTC" firstStartedPulling="2025-12-06 04:17:22.090805695 +0000 UTC m=+4295.213413267" lastFinishedPulling="2025-12-06 04:17:25.555157319 +0000 UTC m=+4298.677764901" observedRunningTime="2025-12-06 04:17:26.177109165 +0000 UTC m=+4299.299716747" watchObservedRunningTime="2025-12-06 04:17:26.185246505 +0000 UTC m=+4299.307854077"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.632844 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"]
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.650257 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.659525 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"]
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.768280 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.768625 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.768684 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgzcz\" (UniqueName: \"kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.870114 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.870183 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.870293 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgzcz\" (UniqueName: \"kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.870800 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.870837 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7"
Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.899888 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgzcz\" (UniqueName: \"kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz\") pod \"redhat-marketplace-rs9p7\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:29 crc kubenswrapper[4801]: I1206 04:17:29.980165 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:30 crc kubenswrapper[4801]: I1206 04:17:30.506782 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"] Dec 06 04:17:30 crc kubenswrapper[4801]: W1206 04:17:30.510422 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod554fc394_f9e9_4932_822f_ab6498e6a883.slice/crio-6fa4e1323db7f3623137987f74e636efc1eae39a24814c35a85c84f977060803 WatchSource:0}: Error finding container 6fa4e1323db7f3623137987f74e636efc1eae39a24814c35a85c84f977060803: Status 404 returned error can't find the container with id 6fa4e1323db7f3623137987f74e636efc1eae39a24814c35a85c84f977060803 Dec 06 04:17:31 crc kubenswrapper[4801]: I1206 04:17:31.226701 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerStarted","Data":"6fa4e1323db7f3623137987f74e636efc1eae39a24814c35a85c84f977060803"} Dec 06 04:17:31 crc kubenswrapper[4801]: I1206 04:17:31.332087 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4s9lv" Dec 06 04:17:31 crc kubenswrapper[4801]: I1206 04:17:31.332411 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4s9lv" 
Dec 06 04:17:31 crc kubenswrapper[4801]: I1206 04:17:31.382244 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4s9lv" Dec 06 04:17:32 crc kubenswrapper[4801]: I1206 04:17:32.226299 4801 generic.go:334] "Generic (PLEG): container finished" podID="554fc394-f9e9-4932-822f-ab6498e6a883" containerID="50843f78e5c204d30364d20c939e09a1af14a6d99e8868bcefc0b642eac559de" exitCode=0 Dec 06 04:17:32 crc kubenswrapper[4801]: I1206 04:17:32.226409 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerDied","Data":"50843f78e5c204d30364d20c939e09a1af14a6d99e8868bcefc0b642eac559de"} Dec 06 04:17:32 crc kubenswrapper[4801]: I1206 04:17:32.299529 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4s9lv" Dec 06 04:17:33 crc kubenswrapper[4801]: I1206 04:17:33.608029 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"] Dec 06 04:17:34 crc kubenswrapper[4801]: E1206 04:17:34.172732 4801 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod554fc394_f9e9_4932_822f_ab6498e6a883.slice/crio-8bec68cbb2cec1c0fafac2ffa49aba6314bae4f10e54d55a7c1cf123c65f5629.scope\": RecentStats: unable to find data in memory cache]" Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.253575 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerDied","Data":"8bec68cbb2cec1c0fafac2ffa49aba6314bae4f10e54d55a7c1cf123c65f5629"} Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.253985 4801 generic.go:334] "Generic (PLEG): container finished" 
podID="554fc394-f9e9-4932-822f-ab6498e6a883" containerID="8bec68cbb2cec1c0fafac2ffa49aba6314bae4f10e54d55a7c1cf123c65f5629" exitCode=0 Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.254249 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4s9lv" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="registry-server" containerID="cri-o://7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8" gracePeriod=2 Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.895987 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4s9lv" Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.995331 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxfcz\" (UniqueName: \"kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz\") pod \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.995558 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities\") pod \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.995998 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content\") pod \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\" (UID: \"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6\") " Dec 06 04:17:34 crc kubenswrapper[4801]: I1206 04:17:34.996900 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities" 
(OuterVolumeSpecName: "utilities") pod "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" (UID: "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.017068 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz" (OuterVolumeSpecName: "kube-api-access-vxfcz") pod "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" (UID: "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6"). InnerVolumeSpecName "kube-api-access-vxfcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.098920 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.098960 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxfcz\" (UniqueName: \"kubernetes.io/projected/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-kube-api-access-vxfcz\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.131063 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" (UID: "d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.200221 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.266532 4801 generic.go:334] "Generic (PLEG): container finished" podID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerID="7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8" exitCode=0 Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.266585 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerDied","Data":"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8"} Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.266622 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4s9lv" event={"ID":"d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6","Type":"ContainerDied","Data":"d17582c442f49f28f679ae85b2f283c64f6aa91a18c2f09d206d0e5a4cb60284"} Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.266648 4801 scope.go:117] "RemoveContainer" containerID="7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.266856 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4s9lv" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.303093 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"] Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.317233 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4s9lv"] Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.327100 4801 scope.go:117] "RemoveContainer" containerID="3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.567415 4801 scope.go:117] "RemoveContainer" containerID="0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.623138 4801 scope.go:117] "RemoveContainer" containerID="7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8" Dec 06 04:17:35 crc kubenswrapper[4801]: E1206 04:17:35.623624 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8\": container with ID starting with 7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8 not found: ID does not exist" containerID="7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.623666 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8"} err="failed to get container status \"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8\": rpc error: code = NotFound desc = could not find container \"7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8\": container with ID starting with 7fa8dbff1eb0d7fee4a806c42389bd962100b8bee3e3a1ac19c8e9b3a9321bb8 not found: ID does 
not exist" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.623700 4801 scope.go:117] "RemoveContainer" containerID="3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7" Dec 06 04:17:35 crc kubenswrapper[4801]: E1206 04:17:35.624100 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7\": container with ID starting with 3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7 not found: ID does not exist" containerID="3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.624123 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7"} err="failed to get container status \"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7\": rpc error: code = NotFound desc = could not find container \"3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7\": container with ID starting with 3a162dd3e56595eb2745ed02deeeb40760e48cd3dba5060a8426f1cae49f8fb7 not found: ID does not exist" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.624137 4801 scope.go:117] "RemoveContainer" containerID="0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69" Dec 06 04:17:35 crc kubenswrapper[4801]: E1206 04:17:35.624425 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69\": container with ID starting with 0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69 not found: ID does not exist" containerID="0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69" Dec 06 04:17:35 crc kubenswrapper[4801]: I1206 04:17:35.624483 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69"} err="failed to get container status \"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69\": rpc error: code = NotFound desc = could not find container \"0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69\": container with ID starting with 0db9c1e4a013fc2fdccc52e9918eb19dc18541e059dfb76b1b239be269fb0b69 not found: ID does not exist" Dec 06 04:17:36 crc kubenswrapper[4801]: I1206 04:17:36.289534 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerStarted","Data":"5b23361305bfc1016cb10685c174c8c06d127e28ad085f4e3e272c4fac9f7b89"} Dec 06 04:17:36 crc kubenswrapper[4801]: I1206 04:17:36.322733 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs9p7" podStartSLOduration=3.710894135 podStartE2EDuration="7.322711878s" podCreationTimestamp="2025-12-06 04:17:29 +0000 UTC" firstStartedPulling="2025-12-06 04:17:32.227742004 +0000 UTC m=+4305.350349576" lastFinishedPulling="2025-12-06 04:17:35.839559757 +0000 UTC m=+4308.962167319" observedRunningTime="2025-12-06 04:17:36.312213745 +0000 UTC m=+4309.434821357" watchObservedRunningTime="2025-12-06 04:17:36.322711878 +0000 UTC m=+4309.445319450" Dec 06 04:17:37 crc kubenswrapper[4801]: I1206 04:17:37.223658 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" path="/var/lib/kubelet/pods/d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6/volumes" Dec 06 04:17:39 crc kubenswrapper[4801]: I1206 04:17:39.980326 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:39 crc kubenswrapper[4801]: I1206 04:17:39.982786 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:40 crc kubenswrapper[4801]: I1206 04:17:40.038083 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:41 crc kubenswrapper[4801]: I1206 04:17:41.384141 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:41 crc kubenswrapper[4801]: I1206 04:17:41.433804 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"] Dec 06 04:17:43 crc kubenswrapper[4801]: I1206 04:17:43.351213 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rs9p7" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="registry-server" containerID="cri-o://5b23361305bfc1016cb10685c174c8c06d127e28ad085f4e3e272c4fac9f7b89" gracePeriod=2 Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.364637 4801 generic.go:334] "Generic (PLEG): container finished" podID="554fc394-f9e9-4932-822f-ab6498e6a883" containerID="5b23361305bfc1016cb10685c174c8c06d127e28ad085f4e3e272c4fac9f7b89" exitCode=0 Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.364850 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerDied","Data":"5b23361305bfc1016cb10685c174c8c06d127e28ad085f4e3e272c4fac9f7b89"} Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.660482 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.826648 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content\") pod \"554fc394-f9e9-4932-822f-ab6498e6a883\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.827032 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities\") pod \"554fc394-f9e9-4932-822f-ab6498e6a883\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.827234 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgzcz\" (UniqueName: \"kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz\") pod \"554fc394-f9e9-4932-822f-ab6498e6a883\" (UID: \"554fc394-f9e9-4932-822f-ab6498e6a883\") " Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.828166 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities" (OuterVolumeSpecName: "utilities") pod "554fc394-f9e9-4932-822f-ab6498e6a883" (UID: "554fc394-f9e9-4932-822f-ab6498e6a883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.833675 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz" (OuterVolumeSpecName: "kube-api-access-xgzcz") pod "554fc394-f9e9-4932-822f-ab6498e6a883" (UID: "554fc394-f9e9-4932-822f-ab6498e6a883"). InnerVolumeSpecName "kube-api-access-xgzcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.857242 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "554fc394-f9e9-4932-822f-ab6498e6a883" (UID: "554fc394-f9e9-4932-822f-ab6498e6a883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.929503 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgzcz\" (UniqueName: \"kubernetes.io/projected/554fc394-f9e9-4932-822f-ab6498e6a883-kube-api-access-xgzcz\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.930129 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:44 crc kubenswrapper[4801]: I1206 04:17:44.930194 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/554fc394-f9e9-4932-822f-ab6498e6a883-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.376125 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9p7" event={"ID":"554fc394-f9e9-4932-822f-ab6498e6a883","Type":"ContainerDied","Data":"6fa4e1323db7f3623137987f74e636efc1eae39a24814c35a85c84f977060803"} Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.376175 4801 scope.go:117] "RemoveContainer" containerID="5b23361305bfc1016cb10685c174c8c06d127e28ad085f4e3e272c4fac9f7b89" Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.376336 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9p7" Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.400347 4801 scope.go:117] "RemoveContainer" containerID="8bec68cbb2cec1c0fafac2ffa49aba6314bae4f10e54d55a7c1cf123c65f5629" Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.404242 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"] Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.423932 4801 scope.go:117] "RemoveContainer" containerID="50843f78e5c204d30364d20c939e09a1af14a6d99e8868bcefc0b642eac559de" Dec 06 04:17:45 crc kubenswrapper[4801]: I1206 04:17:45.434775 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9p7"] Dec 06 04:17:47 crc kubenswrapper[4801]: I1206 04:17:47.223147 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" path="/var/lib/kubelet/pods/554fc394-f9e9-4932-822f-ab6498e6a883/volumes" Dec 06 04:18:11 crc kubenswrapper[4801]: I1206 04:18:11.169518 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:18:11 crc kubenswrapper[4801]: I1206 04:18:11.170266 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:18:41 crc kubenswrapper[4801]: I1206 04:18:41.169573 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:18:41 crc kubenswrapper[4801]: I1206 04:18:41.170131 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.169914 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.170424 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.170470 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.171240 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.171292 4801 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" gracePeriod=600 Dec 06 04:19:11 crc kubenswrapper[4801]: E1206 04:19:11.298071 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.367160 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" exitCode=0 Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.367208 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c"} Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.367277 4801 scope.go:117] "RemoveContainer" containerID="73f33dfc4d2137223ad21fb068f21785eb92d8794fa99c5fffb378420e3a2eb4" Dec 06 04:19:11 crc kubenswrapper[4801]: I1206 04:19:11.368224 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:19:11 crc kubenswrapper[4801]: E1206 04:19:11.368657 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:19:25 crc kubenswrapper[4801]: I1206 04:19:25.212612 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:19:25 crc kubenswrapper[4801]: E1206 04:19:25.213477 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:19:37 crc kubenswrapper[4801]: I1206 04:19:37.219142 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:19:37 crc kubenswrapper[4801]: E1206 04:19:37.220035 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:19:51 crc kubenswrapper[4801]: I1206 04:19:51.212988 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:19:51 crc kubenswrapper[4801]: E1206 04:19:51.214148 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:20:06 crc kubenswrapper[4801]: I1206 04:20:06.212724 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:20:06 crc kubenswrapper[4801]: E1206 04:20:06.214210 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:20:20 crc kubenswrapper[4801]: I1206 04:20:20.214717 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:20:20 crc kubenswrapper[4801]: E1206 04:20:20.216747 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.708221 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710062 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="registry-server" Dec 
06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710082 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="registry-server" Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710109 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="extract-content" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710116 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="extract-content" Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710125 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="extract-utilities" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710132 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="extract-utilities" Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710148 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="extract-content" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710157 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="extract-content" Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710170 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="extract-utilities" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710178 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="extract-utilities" Dec 06 04:20:28 crc kubenswrapper[4801]: E1206 04:20:28.710202 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="registry-server" Dec 06 
04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710209 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="registry-server" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710406 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="554fc394-f9e9-4932-822f-ab6498e6a883" containerName="registry-server" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.710432 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19ff4b8-7f50-4eb4-b64f-54ba8b03c6e6" containerName="registry-server" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.712851 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.722670 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.781663 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.781737 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262kf\" (UniqueName: \"kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.781924 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.884797 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.884883 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-262kf\" (UniqueName: \"kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.885020 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.885588 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.885890 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.903199 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.905728 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.912348 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-262kf\" (UniqueName: \"kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf\") pod \"certified-operators-nh2db\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.924495 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.987330 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:28 crc kubenswrapper[4801]: I1206 04:20:28.987793 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdpv\" (UniqueName: \"kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:28 
crc kubenswrapper[4801]: I1206 04:20:28.987864 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.039604 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.090675 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.090792 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdpv\" (UniqueName: \"kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.090821 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.091342 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.091629 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.110433 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdpv\" (UniqueName: \"kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv\") pod \"community-operators-224mn\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.282711 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.650219 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:29 crc kubenswrapper[4801]: I1206 04:20:29.852679 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:30 crc kubenswrapper[4801]: I1206 04:20:30.080053 4801 generic.go:334] "Generic (PLEG): container finished" podID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerID="e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0" exitCode=0 Dec 06 04:20:30 crc kubenswrapper[4801]: I1206 04:20:30.080175 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerDied","Data":"e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0"} Dec 06 04:20:30 crc kubenswrapper[4801]: I1206 04:20:30.080428 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerStarted","Data":"235b4ad4d0c95dcab3437f60ddf772f98855ecc69890c8e8ff32c4084625cc45"} Dec 06 04:20:30 crc kubenswrapper[4801]: I1206 04:20:30.084769 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerStarted","Data":"6495cc0488fa34ae4ce13031ed128fccabab0c6a576e45551b698c618374ee97"} Dec 06 04:20:31 crc kubenswrapper[4801]: I1206 04:20:31.118006 4801 generic.go:334] "Generic (PLEG): container finished" podID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerID="9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d" exitCode=0 Dec 06 04:20:31 crc kubenswrapper[4801]: I1206 04:20:31.118352 4801 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerDied","Data":"9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d"} Dec 06 04:20:32 crc kubenswrapper[4801]: I1206 04:20:32.133286 4801 generic.go:334] "Generic (PLEG): container finished" podID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerID="9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d" exitCode=0 Dec 06 04:20:32 crc kubenswrapper[4801]: I1206 04:20:32.133491 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerDied","Data":"9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d"} Dec 06 04:20:33 crc kubenswrapper[4801]: I1206 04:20:33.145282 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerStarted","Data":"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889"} Dec 06 04:20:33 crc kubenswrapper[4801]: I1206 04:20:33.147092 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerDied","Data":"2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530"} Dec 06 04:20:33 crc kubenswrapper[4801]: I1206 04:20:33.146998 4801 generic.go:334] "Generic (PLEG): container finished" podID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerID="2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530" exitCode=0 Dec 06 04:20:33 crc kubenswrapper[4801]: I1206 04:20:33.171287 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nh2db" podStartSLOduration=2.664747277 podStartE2EDuration="5.171254099s" 
podCreationTimestamp="2025-12-06 04:20:28 +0000 UTC" firstStartedPulling="2025-12-06 04:20:30.08319108 +0000 UTC m=+4483.205798652" lastFinishedPulling="2025-12-06 04:20:32.589697892 +0000 UTC m=+4485.712305474" observedRunningTime="2025-12-06 04:20:33.16940633 +0000 UTC m=+4486.292013902" watchObservedRunningTime="2025-12-06 04:20:33.171254099 +0000 UTC m=+4486.293861671" Dec 06 04:20:34 crc kubenswrapper[4801]: I1206 04:20:34.161949 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerStarted","Data":"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a"} Dec 06 04:20:34 crc kubenswrapper[4801]: I1206 04:20:34.191173 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-224mn" podStartSLOduration=3.752801311 podStartE2EDuration="6.191148628s" podCreationTimestamp="2025-12-06 04:20:28 +0000 UTC" firstStartedPulling="2025-12-06 04:20:31.120897988 +0000 UTC m=+4484.243505560" lastFinishedPulling="2025-12-06 04:20:33.559245305 +0000 UTC m=+4486.681852877" observedRunningTime="2025-12-06 04:20:34.186125423 +0000 UTC m=+4487.308733035" watchObservedRunningTime="2025-12-06 04:20:34.191148628 +0000 UTC m=+4487.313756200" Dec 06 04:20:34 crc kubenswrapper[4801]: I1206 04:20:34.214502 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:20:34 crc kubenswrapper[4801]: E1206 04:20:34.214865 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.039708 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.040209 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.092345 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.277054 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.284191 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.284227 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.335665 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:39 crc kubenswrapper[4801]: I1206 04:20:39.347264 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:40 crc kubenswrapper[4801]: I1206 04:20:40.308741 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:41 crc kubenswrapper[4801]: I1206 04:20:41.257072 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nh2db" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" 
containerName="registry-server" containerID="cri-o://b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889" gracePeriod=2 Dec 06 04:20:41 crc kubenswrapper[4801]: I1206 04:20:41.742579 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.004027 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.140547 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-262kf\" (UniqueName: \"kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf\") pod \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.140938 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities\") pod \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.140963 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content\") pod \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\" (UID: \"04fa43eb-f0c7-4bc0-887c-a4ad76881169\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.141868 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities" (OuterVolumeSpecName: "utilities") pod "04fa43eb-f0c7-4bc0-887c-a4ad76881169" (UID: "04fa43eb-f0c7-4bc0-887c-a4ad76881169"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.148410 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf" (OuterVolumeSpecName: "kube-api-access-262kf") pod "04fa43eb-f0c7-4bc0-887c-a4ad76881169" (UID: "04fa43eb-f0c7-4bc0-887c-a4ad76881169"). InnerVolumeSpecName "kube-api-access-262kf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.196986 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04fa43eb-f0c7-4bc0-887c-a4ad76881169" (UID: "04fa43eb-f0c7-4bc0-887c-a4ad76881169"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.242894 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-262kf\" (UniqueName: \"kubernetes.io/projected/04fa43eb-f0c7-4bc0-887c-a4ad76881169-kube-api-access-262kf\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.242944 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.242954 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04fa43eb-f0c7-4bc0-887c-a4ad76881169-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277455 4801 generic.go:334] "Generic (PLEG): container finished" podID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" 
containerID="b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889" exitCode=0 Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277657 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nh2db" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277720 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-224mn" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="registry-server" containerID="cri-o://243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a" gracePeriod=2 Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277872 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerDied","Data":"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889"} Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277906 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nh2db" event={"ID":"04fa43eb-f0c7-4bc0-887c-a4ad76881169","Type":"ContainerDied","Data":"235b4ad4d0c95dcab3437f60ddf772f98855ecc69890c8e8ff32c4084625cc45"} Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.277925 4801 scope.go:117] "RemoveContainer" containerID="b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.337473 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.338118 4801 scope.go:117] "RemoveContainer" containerID="9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.346963 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-nh2db"] Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.393461 4801 scope.go:117] "RemoveContainer" containerID="e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.438159 4801 scope.go:117] "RemoveContainer" containerID="b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889" Dec 06 04:20:42 crc kubenswrapper[4801]: E1206 04:20:42.438857 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889\": container with ID starting with b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889 not found: ID does not exist" containerID="b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.438921 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889"} err="failed to get container status \"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889\": rpc error: code = NotFound desc = could not find container \"b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889\": container with ID starting with b195526899f4420168e0a033e6ed9b5086faabe40f2370613e5a7de1d6d39889 not found: ID does not exist" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.438963 4801 scope.go:117] "RemoveContainer" containerID="9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d" Dec 06 04:20:42 crc kubenswrapper[4801]: E1206 04:20:42.439945 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d\": container with ID starting with 
9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d not found: ID does not exist" containerID="9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.440011 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d"} err="failed to get container status \"9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d\": rpc error: code = NotFound desc = could not find container \"9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d\": container with ID starting with 9b034c0f684bbe913b9ffa3db397e7b4430babc82e6e7ab20711f234c2976e1d not found: ID does not exist" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.440064 4801 scope.go:117] "RemoveContainer" containerID="e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0" Dec 06 04:20:42 crc kubenswrapper[4801]: E1206 04:20:42.441816 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0\": container with ID starting with e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0 not found: ID does not exist" containerID="e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.441854 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0"} err="failed to get container status \"e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0\": rpc error: code = NotFound desc = could not find container \"e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0\": container with ID starting with e66babc0307979435eebf96b71aa532cc513de6ab9550b10edb412c463ec9de0 not found: ID does not 
exist" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.846232 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.964238 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdpv\" (UniqueName: \"kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv\") pod \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.964476 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content\") pod \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.964527 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities\") pod \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\" (UID: \"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888\") " Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.967411 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities" (OuterVolumeSpecName: "utilities") pod "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" (UID: "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:20:42 crc kubenswrapper[4801]: I1206 04:20:42.970218 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv" (OuterVolumeSpecName: "kube-api-access-mwdpv") pod "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" (UID: "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888"). InnerVolumeSpecName "kube-api-access-mwdpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.015479 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" (UID: "c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.068791 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdpv\" (UniqueName: \"kubernetes.io/projected/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-kube-api-access-mwdpv\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.068846 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.068863 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.225156 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" 
path="/var/lib/kubelet/pods/04fa43eb-f0c7-4bc0-887c-a4ad76881169/volumes" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.289438 4801 generic.go:334] "Generic (PLEG): container finished" podID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerID="243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a" exitCode=0 Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.289500 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-224mn" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.289513 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerDied","Data":"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a"} Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.290016 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-224mn" event={"ID":"c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888","Type":"ContainerDied","Data":"6495cc0488fa34ae4ce13031ed128fccabab0c6a576e45551b698c618374ee97"} Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.290039 4801 scope.go:117] "RemoveContainer" containerID="243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.318298 4801 scope.go:117] "RemoveContainer" containerID="2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.323597 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.334224 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-224mn"] Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.753342 4801 scope.go:117] "RemoveContainer" 
containerID="9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.918639 4801 scope.go:117] "RemoveContainer" containerID="243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a" Dec 06 04:20:43 crc kubenswrapper[4801]: E1206 04:20:43.919886 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a\": container with ID starting with 243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a not found: ID does not exist" containerID="243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.919925 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a"} err="failed to get container status \"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a\": rpc error: code = NotFound desc = could not find container \"243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a\": container with ID starting with 243431cc38e0272a728a6245503e55ec31d0011e953786ebb03f9d04bc7e488a not found: ID does not exist" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.919954 4801 scope.go:117] "RemoveContainer" containerID="2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530" Dec 06 04:20:43 crc kubenswrapper[4801]: E1206 04:20:43.921270 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530\": container with ID starting with 2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530 not found: ID does not exist" containerID="2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530" Dec 06 04:20:43 crc 
kubenswrapper[4801]: I1206 04:20:43.921302 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530"} err="failed to get container status \"2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530\": rpc error: code = NotFound desc = could not find container \"2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530\": container with ID starting with 2e37841e69434abb1aaca3433a74e362645270cb04bbacc901fcd39b32b5d530 not found: ID does not exist" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.921318 4801 scope.go:117] "RemoveContainer" containerID="9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d" Dec 06 04:20:43 crc kubenswrapper[4801]: E1206 04:20:43.922449 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d\": container with ID starting with 9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d not found: ID does not exist" containerID="9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d" Dec 06 04:20:43 crc kubenswrapper[4801]: I1206 04:20:43.922474 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d"} err="failed to get container status \"9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d\": rpc error: code = NotFound desc = could not find container \"9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d\": container with ID starting with 9896d01b2c9676c1b9dfb9cfae4638875b9187fd79e51a8ee3ea0c918ffb954d not found: ID does not exist" Dec 06 04:20:45 crc kubenswrapper[4801]: I1206 04:20:45.223826 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" 
path="/var/lib/kubelet/pods/c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888/volumes" Dec 06 04:20:49 crc kubenswrapper[4801]: I1206 04:20:49.216620 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:20:49 crc kubenswrapper[4801]: E1206 04:20:49.217436 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:01 crc kubenswrapper[4801]: I1206 04:21:01.213176 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:21:01 crc kubenswrapper[4801]: E1206 04:21:01.213896 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:08 crc kubenswrapper[4801]: I1206 04:21:08.038569 4801 patch_prober.go:28] interesting pod/controller-manager-58ddb96588-w4bb8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 04:21:08 crc kubenswrapper[4801]: I1206 04:21:08.046359 4801 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" podUID="30de1f7b-1348-4f12-9952-f639cd0f3e2f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 04:21:08 crc kubenswrapper[4801]: I1206 04:21:08.055923 4801 patch_prober.go:28] interesting pod/controller-manager-58ddb96588-w4bb8 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 04:21:08 crc kubenswrapper[4801]: I1206 04:21:08.055999 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-58ddb96588-w4bb8" podUID="30de1f7b-1348-4f12-9952-f639cd0f3e2f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 04:21:16 crc kubenswrapper[4801]: I1206 04:21:16.212073 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:21:16 crc kubenswrapper[4801]: E1206 04:21:16.212856 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:27 crc kubenswrapper[4801]: I1206 04:21:27.218304 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:21:27 crc 
kubenswrapper[4801]: E1206 04:21:27.219057 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:36 crc kubenswrapper[4801]: I1206 04:21:36.736530 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:21:36 crc kubenswrapper[4801]: I1206 04:21:36.737178 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:21:40 crc kubenswrapper[4801]: I1206 04:21:40.211909 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:21:40 crc kubenswrapper[4801]: E1206 04:21:40.212570 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:53 crc kubenswrapper[4801]: I1206 04:21:53.212374 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:21:53 crc kubenswrapper[4801]: E1206 04:21:53.213049 4801 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:21:56 crc kubenswrapper[4801]: I1206 04:21:56.737121 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:21:56 crc kubenswrapper[4801]: I1206 04:21:56.737176 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:22:05 crc kubenswrapper[4801]: I1206 04:22:05.213272 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:22:05 crc kubenswrapper[4801]: E1206 04:22:05.213950 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:22:16 crc kubenswrapper[4801]: I1206 04:22:16.212709 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:22:16 crc kubenswrapper[4801]: E1206 04:22:16.213527 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:22:31 crc kubenswrapper[4801]: I1206 04:22:31.212422 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:22:31 crc kubenswrapper[4801]: E1206 04:22:31.213459 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:22:43 crc kubenswrapper[4801]: I1206 04:22:43.212960 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:22:43 crc kubenswrapper[4801]: E1206 04:22:43.213934 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:22:54 crc kubenswrapper[4801]: I1206 04:22:54.213169 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:22:54 crc kubenswrapper[4801]: E1206 04:22:54.214086 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:23:09 crc kubenswrapper[4801]: I1206 04:23:09.213103 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:23:09 crc kubenswrapper[4801]: E1206 04:23:09.213723 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:23:22 crc kubenswrapper[4801]: I1206 04:23:22.213005 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:23:22 crc kubenswrapper[4801]: E1206 04:23:22.213788 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:23:37 crc kubenswrapper[4801]: I1206 04:23:37.218789 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:23:37 crc kubenswrapper[4801]: E1206 04:23:37.219700 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:23:51 crc kubenswrapper[4801]: I1206 04:23:51.212067 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:23:51 crc kubenswrapper[4801]: E1206 04:23:51.212983 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:24:05 crc kubenswrapper[4801]: I1206 04:24:05.216729 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:24:05 crc kubenswrapper[4801]: E1206 04:24:05.218602 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:24:17 crc kubenswrapper[4801]: I1206 04:24:17.219908 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:24:17 crc kubenswrapper[4801]: I1206 04:24:17.861276 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a"} Dec 06 04:26:41 crc kubenswrapper[4801]: I1206 04:26:41.169798 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:26:41 crc kubenswrapper[4801]: I1206 04:26:41.170425 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:27:11 crc kubenswrapper[4801]: I1206 04:27:11.169876 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:27:11 crc kubenswrapper[4801]: I1206 04:27:11.170547 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:27:41 crc kubenswrapper[4801]: I1206 04:27:41.169498 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:27:41 crc kubenswrapper[4801]: I1206 04:27:41.170411 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:27:41 crc kubenswrapper[4801]: I1206 04:27:41.170507 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:27:41 crc kubenswrapper[4801]: I1206 04:27:41.171942 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:27:41 crc kubenswrapper[4801]: I1206 04:27:41.172092 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a" gracePeriod=600 Dec 06 04:27:42 crc kubenswrapper[4801]: I1206 04:27:42.711299 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a" exitCode=0 Dec 06 04:27:42 crc kubenswrapper[4801]: I1206 04:27:42.711376 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a"} Dec 06 04:27:42 crc kubenswrapper[4801]: I1206 04:27:42.711638 4801 scope.go:117] "RemoveContainer" containerID="a8e4873835bcf7dcc80123827d6cdf58d11616a64238bad7c290d3b179ece61c" Dec 06 04:27:43 crc kubenswrapper[4801]: I1206 04:27:43.724581 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164"} Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.367498 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.368969 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="extract-utilities" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.368990 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="extract-utilities" Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.369007 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="extract-utilities" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369015 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="extract-utilities" Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.369041 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="extract-content" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369049 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="extract-content" Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.369071 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369099 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.369118 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369125 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: E1206 04:27:56.369137 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="extract-content" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369144 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="extract-content" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369643 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0dcaa18-de0a-42c9-a0b1-3e7d15bc1888" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.369664 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fa43eb-f0c7-4bc0-887c-a4ad76881169" containerName="registry-server" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.371698 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.380422 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.482170 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.482358 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6dc7\" (UniqueName: \"kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.482396 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.585203 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6dc7\" (UniqueName: \"kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.585285 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.585378 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.585896 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.586034 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.608031 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6dc7\" (UniqueName: \"kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7\") pod \"redhat-marketplace-k9hln\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:56 crc kubenswrapper[4801]: I1206 04:27:56.713886 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:27:57 crc kubenswrapper[4801]: I1206 04:27:57.257837 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:27:57 crc kubenswrapper[4801]: I1206 04:27:57.858066 4801 generic.go:334] "Generic (PLEG): container finished" podID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerID="f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3" exitCode=0 Dec 06 04:27:57 crc kubenswrapper[4801]: I1206 04:27:57.858163 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerDied","Data":"f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3"} Dec 06 04:27:57 crc kubenswrapper[4801]: I1206 04:27:57.858416 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerStarted","Data":"2138e302968ee6bb60528b403016775e48c973b7fe479f80389b05fddf1ec21b"} Dec 06 04:27:57 crc kubenswrapper[4801]: I1206 04:27:57.860166 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 04:27:59 crc kubenswrapper[4801]: I1206 04:27:59.878274 4801 generic.go:334] "Generic (PLEG): container finished" podID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerID="555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae" exitCode=0 Dec 06 04:27:59 crc kubenswrapper[4801]: I1206 04:27:59.878335 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerDied","Data":"555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae"} Dec 06 04:28:00 crc kubenswrapper[4801]: I1206 04:28:00.925183 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerStarted","Data":"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1"} Dec 06 04:28:00 crc kubenswrapper[4801]: I1206 04:28:00.980940 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k9hln" podStartSLOduration=2.433884905 podStartE2EDuration="4.980912543s" podCreationTimestamp="2025-12-06 04:27:56 +0000 UTC" firstStartedPulling="2025-12-06 04:27:57.859802502 +0000 UTC m=+4930.982410074" lastFinishedPulling="2025-12-06 04:28:00.40683014 +0000 UTC m=+4933.529437712" observedRunningTime="2025-12-06 04:28:00.949433716 +0000 UTC m=+4934.072041288" watchObservedRunningTime="2025-12-06 04:28:00.980912543 +0000 UTC m=+4934.103520115" Dec 06 04:28:06 crc kubenswrapper[4801]: I1206 04:28:06.716945 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:06 crc kubenswrapper[4801]: I1206 04:28:06.717442 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:06 crc kubenswrapper[4801]: I1206 04:28:06.770401 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:07 crc kubenswrapper[4801]: I1206 04:28:07.028000 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:07 crc kubenswrapper[4801]: I1206 04:28:07.071775 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:28:08 crc kubenswrapper[4801]: I1206 04:28:08.995064 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k9hln" 
podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="registry-server" containerID="cri-o://bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1" gracePeriod=2 Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.491286 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.642573 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6dc7\" (UniqueName: \"kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7\") pod \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.642627 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities\") pod \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.642822 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content\") pod \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\" (UID: \"2fca415a-d1a2-4af0-b6c1-62a94ab116c8\") " Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.643568 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities" (OuterVolumeSpecName: "utilities") pod "2fca415a-d1a2-4af0-b6c1-62a94ab116c8" (UID: "2fca415a-d1a2-4af0-b6c1-62a94ab116c8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.654902 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7" (OuterVolumeSpecName: "kube-api-access-q6dc7") pod "2fca415a-d1a2-4af0-b6c1-62a94ab116c8" (UID: "2fca415a-d1a2-4af0-b6c1-62a94ab116c8"). InnerVolumeSpecName "kube-api-access-q6dc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.688525 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fca415a-d1a2-4af0-b6c1-62a94ab116c8" (UID: "2fca415a-d1a2-4af0-b6c1-62a94ab116c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.744807 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6dc7\" (UniqueName: \"kubernetes.io/projected/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-kube-api-access-q6dc7\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.744837 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:09 crc kubenswrapper[4801]: I1206 04:28:09.744846 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fca415a-d1a2-4af0-b6c1-62a94ab116c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.005410 4801 generic.go:334] "Generic (PLEG): container finished" podID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" 
containerID="bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1" exitCode=0 Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.005499 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerDied","Data":"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1"} Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.007299 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k9hln" event={"ID":"2fca415a-d1a2-4af0-b6c1-62a94ab116c8","Type":"ContainerDied","Data":"2138e302968ee6bb60528b403016775e48c973b7fe479f80389b05fddf1ec21b"} Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.005535 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k9hln" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.007360 4801 scope.go:117] "RemoveContainer" containerID="bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.030950 4801 scope.go:117] "RemoveContainer" containerID="555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.070476 4801 scope.go:117] "RemoveContainer" containerID="f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.077401 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.103451 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k9hln"] Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.130036 4801 scope.go:117] "RemoveContainer" containerID="bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1" Dec 06 
04:28:10 crc kubenswrapper[4801]: E1206 04:28:10.137072 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1\": container with ID starting with bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1 not found: ID does not exist" containerID="bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.137128 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1"} err="failed to get container status \"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1\": rpc error: code = NotFound desc = could not find container \"bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1\": container with ID starting with bc5972d75b56e2b6b80479a803c48ffe1c1227556dd88178125f5e9d034946d1 not found: ID does not exist" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.137161 4801 scope.go:117] "RemoveContainer" containerID="555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae" Dec 06 04:28:10 crc kubenswrapper[4801]: E1206 04:28:10.137622 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae\": container with ID starting with 555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae not found: ID does not exist" containerID="555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.137658 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae"} err="failed to get container status 
\"555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae\": rpc error: code = NotFound desc = could not find container \"555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae\": container with ID starting with 555ede5c5d38be76cbcd2a30591f232b7d57335bfb2351d863f0fbbb5584aeae not found: ID does not exist" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.137678 4801 scope.go:117] "RemoveContainer" containerID="f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3" Dec 06 04:28:10 crc kubenswrapper[4801]: E1206 04:28:10.138191 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3\": container with ID starting with f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3 not found: ID does not exist" containerID="f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3" Dec 06 04:28:10 crc kubenswrapper[4801]: I1206 04:28:10.138223 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3"} err="failed to get container status \"f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3\": rpc error: code = NotFound desc = could not find container \"f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3\": container with ID starting with f69b7b6d3999112d7a95b6d6de8ffb0b3b82ccc2af28a6db4414567cacc511a3 not found: ID does not exist" Dec 06 04:28:11 crc kubenswrapper[4801]: I1206 04:28:11.235645 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" path="/var/lib/kubelet/pods/2fca415a-d1a2-4af0-b6c1-62a94ab116c8/volumes" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.960281 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:12 crc 
kubenswrapper[4801]: E1206 04:28:12.961624 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="extract-utilities" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.961640 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="extract-utilities" Dec 06 04:28:12 crc kubenswrapper[4801]: E1206 04:28:12.961654 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="registry-server" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.961661 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="registry-server" Dec 06 04:28:12 crc kubenswrapper[4801]: E1206 04:28:12.961678 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="extract-content" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.961687 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="extract-content" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.961898 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fca415a-d1a2-4af0-b6c1-62a94ab116c8" containerName="registry-server" Dec 06 04:28:12 crc kubenswrapper[4801]: I1206 04:28:12.963245 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.001090 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.013464 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.013605 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.013643 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lpd\" (UniqueName: \"kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.115476 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.115556 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.115614 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95lpd\" (UniqueName: \"kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.116382 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.116382 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.155576 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95lpd\" (UniqueName: \"kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd\") pod \"redhat-operators-6gqkb\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.298969 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:13 crc kubenswrapper[4801]: I1206 04:28:13.893611 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:14 crc kubenswrapper[4801]: I1206 04:28:14.046412 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerStarted","Data":"c98bff2b38403be1eaa49e29d244a380b2e52e661ad77ac82f20b7bf2199d00b"} Dec 06 04:28:15 crc kubenswrapper[4801]: I1206 04:28:15.056567 4801 generic.go:334] "Generic (PLEG): container finished" podID="78b15557-184a-48a9-9d82-0061c3391fed" containerID="cf5b9ba05062a396c5a51c1f61df5fcfce026874d8da1dcc82de2f7d2741a176" exitCode=0 Dec 06 04:28:15 crc kubenswrapper[4801]: I1206 04:28:15.056674 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerDied","Data":"cf5b9ba05062a396c5a51c1f61df5fcfce026874d8da1dcc82de2f7d2741a176"} Dec 06 04:28:16 crc kubenswrapper[4801]: I1206 04:28:16.068090 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerStarted","Data":"447f96745210e8923569f31702478d73a8d84b8f6f6edaa059f1fe3bdaa76652"} Dec 06 04:28:19 crc kubenswrapper[4801]: I1206 04:28:19.095682 4801 generic.go:334] "Generic (PLEG): container finished" podID="78b15557-184a-48a9-9d82-0061c3391fed" containerID="447f96745210e8923569f31702478d73a8d84b8f6f6edaa059f1fe3bdaa76652" exitCode=0 Dec 06 04:28:19 crc kubenswrapper[4801]: I1206 04:28:19.095799 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" 
event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerDied","Data":"447f96745210e8923569f31702478d73a8d84b8f6f6edaa059f1fe3bdaa76652"} Dec 06 04:28:21 crc kubenswrapper[4801]: I1206 04:28:21.114237 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerStarted","Data":"40ddec0c2adef8aa7208573f83bfdd3375036065e312e02d28c5fc4cec310600"} Dec 06 04:28:21 crc kubenswrapper[4801]: I1206 04:28:21.138944 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6gqkb" podStartSLOduration=4.177254386 podStartE2EDuration="9.138925511s" podCreationTimestamp="2025-12-06 04:28:12 +0000 UTC" firstStartedPulling="2025-12-06 04:28:15.066282092 +0000 UTC m=+4948.188889674" lastFinishedPulling="2025-12-06 04:28:20.027953217 +0000 UTC m=+4953.150560799" observedRunningTime="2025-12-06 04:28:21.133425813 +0000 UTC m=+4954.256033395" watchObservedRunningTime="2025-12-06 04:28:21.138925511 +0000 UTC m=+4954.261533073" Dec 06 04:28:23 crc kubenswrapper[4801]: I1206 04:28:23.299917 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:23 crc kubenswrapper[4801]: I1206 04:28:23.300558 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:24 crc kubenswrapper[4801]: I1206 04:28:24.351144 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6gqkb" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="registry-server" probeResult="failure" output=< Dec 06 04:28:24 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 04:28:24 crc kubenswrapper[4801]: > Dec 06 04:28:33 crc kubenswrapper[4801]: I1206 04:28:33.357060 4801 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:33 crc kubenswrapper[4801]: I1206 04:28:33.410609 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:33 crc kubenswrapper[4801]: I1206 04:28:33.598325 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:35 crc kubenswrapper[4801]: I1206 04:28:35.252336 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6gqkb" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="registry-server" containerID="cri-o://40ddec0c2adef8aa7208573f83bfdd3375036065e312e02d28c5fc4cec310600" gracePeriod=2 Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.262043 4801 generic.go:334] "Generic (PLEG): container finished" podID="78b15557-184a-48a9-9d82-0061c3391fed" containerID="40ddec0c2adef8aa7208573f83bfdd3375036065e312e02d28c5fc4cec310600" exitCode=0 Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.262698 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerDied","Data":"40ddec0c2adef8aa7208573f83bfdd3375036065e312e02d28c5fc4cec310600"} Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.346678 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.446034 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities\") pod \"78b15557-184a-48a9-9d82-0061c3391fed\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.446384 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95lpd\" (UniqueName: \"kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd\") pod \"78b15557-184a-48a9-9d82-0061c3391fed\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.446631 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content\") pod \"78b15557-184a-48a9-9d82-0061c3391fed\" (UID: \"78b15557-184a-48a9-9d82-0061c3391fed\") " Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.446978 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities" (OuterVolumeSpecName: "utilities") pod "78b15557-184a-48a9-9d82-0061c3391fed" (UID: "78b15557-184a-48a9-9d82-0061c3391fed"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.447474 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.471857 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd" (OuterVolumeSpecName: "kube-api-access-95lpd") pod "78b15557-184a-48a9-9d82-0061c3391fed" (UID: "78b15557-184a-48a9-9d82-0061c3391fed"). InnerVolumeSpecName "kube-api-access-95lpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.549208 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95lpd\" (UniqueName: \"kubernetes.io/projected/78b15557-184a-48a9-9d82-0061c3391fed-kube-api-access-95lpd\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.555463 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78b15557-184a-48a9-9d82-0061c3391fed" (UID: "78b15557-184a-48a9-9d82-0061c3391fed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:28:36 crc kubenswrapper[4801]: I1206 04:28:36.650690 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78b15557-184a-48a9-9d82-0061c3391fed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.272923 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6gqkb" event={"ID":"78b15557-184a-48a9-9d82-0061c3391fed","Type":"ContainerDied","Data":"c98bff2b38403be1eaa49e29d244a380b2e52e661ad77ac82f20b7bf2199d00b"} Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.273222 4801 scope.go:117] "RemoveContainer" containerID="40ddec0c2adef8aa7208573f83bfdd3375036065e312e02d28c5fc4cec310600" Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.272965 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6gqkb" Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.293826 4801 scope.go:117] "RemoveContainer" containerID="447f96745210e8923569f31702478d73a8d84b8f6f6edaa059f1fe3bdaa76652" Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.310319 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.317584 4801 scope.go:117] "RemoveContainer" containerID="cf5b9ba05062a396c5a51c1f61df5fcfce026874d8da1dcc82de2f7d2741a176" Dec 06 04:28:37 crc kubenswrapper[4801]: I1206 04:28:37.321533 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6gqkb"] Dec 06 04:28:39 crc kubenswrapper[4801]: I1206 04:28:39.223454 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78b15557-184a-48a9-9d82-0061c3391fed" path="/var/lib/kubelet/pods/78b15557-184a-48a9-9d82-0061c3391fed/volumes" Dec 06 04:29:16 crc 
kubenswrapper[4801]: I1206 04:29:16.739809 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:29:16 crc kubenswrapper[4801]: I1206 04:29:16.740615 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="c85c66a1-6bad-499d-8a59-75020d456cd7" containerName="galera" probeResult="failure" output="command timed out" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.165804 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj"] Dec 06 04:30:00 crc kubenswrapper[4801]: E1206 04:30:00.167980 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="extract-utilities" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.168111 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="extract-utilities" Dec 06 04:30:00 crc kubenswrapper[4801]: E1206 04:30:00.168207 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="extract-content" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.168364 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="extract-content" Dec 06 04:30:00 crc kubenswrapper[4801]: E1206 04:30:00.168453 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="registry-server" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.168523 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="registry-server" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.168802 4801 
memory_manager.go:354] "RemoveStaleState removing state" podUID="78b15557-184a-48a9-9d82-0061c3391fed" containerName="registry-server" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.169998 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.173773 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.174200 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.178649 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj"] Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.296216 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.296303 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.296347 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pg86m\" (UniqueName: \"kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.397990 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.398067 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg86m\" (UniqueName: \"kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.398211 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.399133 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc 
kubenswrapper[4801]: I1206 04:30:00.404323 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.416811 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg86m\" (UniqueName: \"kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m\") pod \"collect-profiles-29416590-nlgfj\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.492529 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:00 crc kubenswrapper[4801]: I1206 04:30:00.940197 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj"] Dec 06 04:30:01 crc kubenswrapper[4801]: I1206 04:30:01.129245 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" event={"ID":"f57d22a4-5e2b-46da-b45e-3a393af8f41b","Type":"ContainerStarted","Data":"eea8095c20fe89e54365f6b7bdc4cf7d477ea3a76bcf73d05376340d9455fb64"} Dec 06 04:30:02 crc kubenswrapper[4801]: I1206 04:30:02.138056 4801 generic.go:334] "Generic (PLEG): container finished" podID="f57d22a4-5e2b-46da-b45e-3a393af8f41b" containerID="8b751792f95e4540ee2a5c8a0324ddf491cef6951eec213d0912ae90f15e2af4" exitCode=0 Dec 06 04:30:02 crc kubenswrapper[4801]: I1206 04:30:02.138149 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" event={"ID":"f57d22a4-5e2b-46da-b45e-3a393af8f41b","Type":"ContainerDied","Data":"8b751792f95e4540ee2a5c8a0324ddf491cef6951eec213d0912ae90f15e2af4"} Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.562016 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.661392 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg86m\" (UniqueName: \"kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m\") pod \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.661718 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume\") pod \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.661879 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume\") pod \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\" (UID: \"f57d22a4-5e2b-46da-b45e-3a393af8f41b\") " Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.662543 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f57d22a4-5e2b-46da-b45e-3a393af8f41b" (UID: "f57d22a4-5e2b-46da-b45e-3a393af8f41b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.673920 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m" (OuterVolumeSpecName: "kube-api-access-pg86m") pod "f57d22a4-5e2b-46da-b45e-3a393af8f41b" (UID: "f57d22a4-5e2b-46da-b45e-3a393af8f41b"). InnerVolumeSpecName "kube-api-access-pg86m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.683946 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f57d22a4-5e2b-46da-b45e-3a393af8f41b" (UID: "f57d22a4-5e2b-46da-b45e-3a393af8f41b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.765135 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f57d22a4-5e2b-46da-b45e-3a393af8f41b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.765167 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg86m\" (UniqueName: \"kubernetes.io/projected/f57d22a4-5e2b-46da-b45e-3a393af8f41b-kube-api-access-pg86m\") on node \"crc\" DevicePath \"\"" Dec 06 04:30:03 crc kubenswrapper[4801]: I1206 04:30:03.765178 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f57d22a4-5e2b-46da-b45e-3a393af8f41b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:30:04 crc kubenswrapper[4801]: I1206 04:30:04.155777 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" 
event={"ID":"f57d22a4-5e2b-46da-b45e-3a393af8f41b","Type":"ContainerDied","Data":"eea8095c20fe89e54365f6b7bdc4cf7d477ea3a76bcf73d05376340d9455fb64"} Dec 06 04:30:04 crc kubenswrapper[4801]: I1206 04:30:04.155818 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eea8095c20fe89e54365f6b7bdc4cf7d477ea3a76bcf73d05376340d9455fb64" Dec 06 04:30:04 crc kubenswrapper[4801]: I1206 04:30:04.155893 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416590-nlgfj" Dec 06 04:30:04 crc kubenswrapper[4801]: I1206 04:30:04.636004 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb"] Dec 06 04:30:04 crc kubenswrapper[4801]: I1206 04:30:04.644841 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416545-4k8cb"] Dec 06 04:30:05 crc kubenswrapper[4801]: I1206 04:30:05.224104 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c138298f-7ed9-4198-bb9c-f3e37aac4834" path="/var/lib/kubelet/pods/c138298f-7ed9-4198-bb9c-f3e37aac4834/volumes" Dec 06 04:30:11 crc kubenswrapper[4801]: I1206 04:30:11.170203 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:30:11 crc kubenswrapper[4801]: I1206 04:30:11.172033 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:30:22 crc 
kubenswrapper[4801]: I1206 04:30:22.829636 4801 scope.go:117] "RemoveContainer" containerID="b6566ace9484d254b36f2291346d1cc13cad296d352fdc301bdf352a5edb657c" Dec 06 04:30:41 crc kubenswrapper[4801]: I1206 04:30:41.169820 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:30:41 crc kubenswrapper[4801]: I1206 04:30:41.170551 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.169629 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.170225 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.170275 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.171116 4801 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.171186 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" gracePeriod=600 Dec 06 04:31:11 crc kubenswrapper[4801]: E1206 04:31:11.294959 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.934684 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" exitCode=0 Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.934884 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164"} Dec 06 04:31:11 crc kubenswrapper[4801]: I1206 04:31:11.935125 4801 scope.go:117] "RemoveContainer" containerID="303839e6e1cc700c6b247a4a32e89124fffc83674026b276741fa85c17015d0a" Dec 06 04:31:11 crc 
kubenswrapper[4801]: I1206 04:31:11.935882 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:31:11 crc kubenswrapper[4801]: E1206 04:31:11.936211 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:31:25 crc kubenswrapper[4801]: I1206 04:31:25.212035 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:31:25 crc kubenswrapper[4801]: E1206 04:31:25.213885 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:31:37 crc kubenswrapper[4801]: I1206 04:31:37.227631 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:31:37 crc kubenswrapper[4801]: E1206 04:31:37.228422 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 
06 04:31:51 crc kubenswrapper[4801]: I1206 04:31:51.212567 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:31:51 crc kubenswrapper[4801]: E1206 04:31:51.213597 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:32:02 crc kubenswrapper[4801]: I1206 04:32:02.213087 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:32:02 crc kubenswrapper[4801]: E1206 04:32:02.214476 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:32:15 crc kubenswrapper[4801]: I1206 04:32:15.212667 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:32:15 crc kubenswrapper[4801]: E1206 04:32:15.213422 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:32:26 crc kubenswrapper[4801]: I1206 04:32:26.213144 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:32:26 crc kubenswrapper[4801]: E1206 04:32:26.214043 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:32:41 crc kubenswrapper[4801]: I1206 04:32:41.216486 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:32:41 crc kubenswrapper[4801]: E1206 04:32:41.217249 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:32:56 crc kubenswrapper[4801]: I1206 04:32:56.212941 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:32:56 crc kubenswrapper[4801]: E1206 04:32:56.213872 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:33:07 crc kubenswrapper[4801]: I1206 04:33:07.218805 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:33:07 crc kubenswrapper[4801]: E1206 04:33:07.219581 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:33:22 crc kubenswrapper[4801]: I1206 04:33:22.212583 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:33:22 crc kubenswrapper[4801]: E1206 04:33:22.213559 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:33:34 crc kubenswrapper[4801]: I1206 04:33:34.213838 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:33:34 crc kubenswrapper[4801]: E1206 04:33:34.214904 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:33:47 crc kubenswrapper[4801]: I1206 04:33:47.221576 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:33:47 crc kubenswrapper[4801]: E1206 04:33:47.222562 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:34:01 crc kubenswrapper[4801]: I1206 04:34:01.179021 4801 patch_prober.go:28] interesting pod/oauth-openshift-69b55d54f6-8lwhp container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 04:34:01 crc kubenswrapper[4801]: I1206 04:34:01.179880 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-69b55d54f6-8lwhp" podUID="87994278-100d-4c62-802e-6635aa1be16d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 04:34:02 crc kubenswrapper[4801]: I1206 04:34:02.213221 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:34:02 crc kubenswrapper[4801]: E1206 04:34:02.215131 4801 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:34:17 crc kubenswrapper[4801]: I1206 04:34:17.219569 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:34:17 crc kubenswrapper[4801]: E1206 04:34:17.220410 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.237671 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:21 crc kubenswrapper[4801]: E1206 04:34:21.238772 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f57d22a4-5e2b-46da-b45e-3a393af8f41b" containerName="collect-profiles" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.238792 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f57d22a4-5e2b-46da-b45e-3a393af8f41b" containerName="collect-profiles" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.239051 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f57d22a4-5e2b-46da-b45e-3a393af8f41b" containerName="collect-profiles" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.241572 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.267691 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.310465 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.310628 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcgq7\" (UniqueName: \"kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.310703 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.412560 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.412683 4801 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gcgq7\" (UniqueName: \"kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.412750 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.413231 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.413251 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.434138 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.435964 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.475357 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.515253 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.515310 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b27tt\" (UniqueName: \"kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.515352 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.552534 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcgq7\" (UniqueName: \"kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7\") pod \"community-operators-kwscb\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.568896 4801 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.623124 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.623182 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b27tt\" (UniqueName: \"kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.623224 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.623807 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.624022 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " 
pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.663411 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b27tt\" (UniqueName: \"kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt\") pod \"certified-operators-8xvjm\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:21 crc kubenswrapper[4801]: I1206 04:34:21.769660 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.267540 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.442151 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:22 crc kubenswrapper[4801]: W1206 04:34:22.454122 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod761e5d2e_bc1c_440f_b97e_3734a0f2ac7b.slice/crio-fa2e46151cc20d608dd73404843bac46ed10aac4ab87069adcda6ebf956d6479 WatchSource:0}: Error finding container fa2e46151cc20d608dd73404843bac46ed10aac4ab87069adcda6ebf956d6479: Status 404 returned error can't find the container with id fa2e46151cc20d608dd73404843bac46ed10aac4ab87069adcda6ebf956d6479 Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.849037 4801 generic.go:334] "Generic (PLEG): container finished" podID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerID="a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2" exitCode=0 Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.849079 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xvjm" 
event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerDied","Data":"a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2"} Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.849357 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xvjm" event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerStarted","Data":"fa2e46151cc20d608dd73404843bac46ed10aac4ab87069adcda6ebf956d6479"} Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.851412 4801 generic.go:334] "Generic (PLEG): container finished" podID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerID="b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5" exitCode=0 Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.851442 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerDied","Data":"b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5"} Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.851457 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerStarted","Data":"c00e4b2ef2681f717ab8819d9c4f84d88b4e889e59c9dd4854ed96689b8ae33d"} Dec 06 04:34:22 crc kubenswrapper[4801]: I1206 04:34:22.851467 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 04:34:23 crc kubenswrapper[4801]: I1206 04:34:23.862038 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerStarted","Data":"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290"} Dec 06 04:34:23 crc kubenswrapper[4801]: I1206 04:34:23.866397 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8xvjm" event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerStarted","Data":"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53"} Dec 06 04:34:25 crc kubenswrapper[4801]: I1206 04:34:25.897305 4801 generic.go:334] "Generic (PLEG): container finished" podID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerID="de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290" exitCode=0 Dec 06 04:34:25 crc kubenswrapper[4801]: I1206 04:34:25.897894 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerDied","Data":"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290"} Dec 06 04:34:25 crc kubenswrapper[4801]: I1206 04:34:25.915431 4801 generic.go:334] "Generic (PLEG): container finished" podID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerID="d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53" exitCode=0 Dec 06 04:34:25 crc kubenswrapper[4801]: I1206 04:34:25.915478 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xvjm" event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerDied","Data":"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53"} Dec 06 04:34:26 crc kubenswrapper[4801]: I1206 04:34:26.926725 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerStarted","Data":"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b"} Dec 06 04:34:27 crc kubenswrapper[4801]: I1206 04:34:27.937646 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xvjm" 
event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerStarted","Data":"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43"} Dec 06 04:34:27 crc kubenswrapper[4801]: I1206 04:34:27.974007 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kwscb" podStartSLOduration=3.5158271169999997 podStartE2EDuration="6.973976351s" podCreationTimestamp="2025-12-06 04:34:21 +0000 UTC" firstStartedPulling="2025-12-06 04:34:22.852959007 +0000 UTC m=+5315.975566579" lastFinishedPulling="2025-12-06 04:34:26.311108241 +0000 UTC m=+5319.433715813" observedRunningTime="2025-12-06 04:34:27.964535407 +0000 UTC m=+5321.087142989" watchObservedRunningTime="2025-12-06 04:34:27.973976351 +0000 UTC m=+5321.096583943" Dec 06 04:34:28 crc kubenswrapper[4801]: I1206 04:34:28.213020 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:34:28 crc kubenswrapper[4801]: E1206 04:34:28.213656 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.569997 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.570593 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.626653 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.651889 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8xvjm" podStartSLOduration=6.543185565 podStartE2EDuration="10.651870799s" podCreationTimestamp="2025-12-06 04:34:21 +0000 UTC" firstStartedPulling="2025-12-06 04:34:22.851250072 +0000 UTC m=+5315.973857644" lastFinishedPulling="2025-12-06 04:34:26.959935306 +0000 UTC m=+5320.082542878" observedRunningTime="2025-12-06 04:34:27.984242458 +0000 UTC m=+5321.106850060" watchObservedRunningTime="2025-12-06 04:34:31.651870799 +0000 UTC m=+5324.774478371" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.771102 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.771158 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:31 crc kubenswrapper[4801]: I1206 04:34:31.827602 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:32 crc kubenswrapper[4801]: I1206 04:34:32.019774 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:33 crc kubenswrapper[4801]: I1206 04:34:33.621212 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:33 crc kubenswrapper[4801]: I1206 04:34:33.992710 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kwscb" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="registry-server" containerID="cri-o://b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b" gracePeriod=2 Dec 
06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.477657 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.628692 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content\") pod \"a869978a-8276-4723-a7a2-d11e4e0d333d\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.628750 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities\") pod \"a869978a-8276-4723-a7a2-d11e4e0d333d\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.628902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcgq7\" (UniqueName: \"kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7\") pod \"a869978a-8276-4723-a7a2-d11e4e0d333d\" (UID: \"a869978a-8276-4723-a7a2-d11e4e0d333d\") " Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.629706 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities" (OuterVolumeSpecName: "utilities") pod "a869978a-8276-4723-a7a2-d11e4e0d333d" (UID: "a869978a-8276-4723-a7a2-d11e4e0d333d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.650980 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7" (OuterVolumeSpecName: "kube-api-access-gcgq7") pod "a869978a-8276-4723-a7a2-d11e4e0d333d" (UID: "a869978a-8276-4723-a7a2-d11e4e0d333d"). InnerVolumeSpecName "kube-api-access-gcgq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.731351 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:34 crc kubenswrapper[4801]: I1206 04:34:34.731389 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcgq7\" (UniqueName: \"kubernetes.io/projected/a869978a-8276-4723-a7a2-d11e4e0d333d-kube-api-access-gcgq7\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.003121 4801 generic.go:334] "Generic (PLEG): container finished" podID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerID="b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b" exitCode=0 Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.003160 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerDied","Data":"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b"} Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.003184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kwscb" event={"ID":"a869978a-8276-4723-a7a2-d11e4e0d333d","Type":"ContainerDied","Data":"c00e4b2ef2681f717ab8819d9c4f84d88b4e889e59c9dd4854ed96689b8ae33d"} Dec 06 04:34:35 crc kubenswrapper[4801]: 
I1206 04:34:35.003203 4801 scope.go:117] "RemoveContainer" containerID="b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.003281 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kwscb" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.024958 4801 scope.go:117] "RemoveContainer" containerID="de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.047587 4801 scope.go:117] "RemoveContainer" containerID="b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.094690 4801 scope.go:117] "RemoveContainer" containerID="b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b" Dec 06 04:34:35 crc kubenswrapper[4801]: E1206 04:34:35.098309 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b\": container with ID starting with b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b not found: ID does not exist" containerID="b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.098433 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b"} err="failed to get container status \"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b\": rpc error: code = NotFound desc = could not find container \"b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b\": container with ID starting with b2cd5938546ef98d3dc0a218f5e9d4a43e0131ba8b0cec2a5ec96825ddd6575b not found: ID does not exist" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.098542 4801 
scope.go:117] "RemoveContainer" containerID="de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290" Dec 06 04:34:35 crc kubenswrapper[4801]: E1206 04:34:35.098894 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290\": container with ID starting with de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290 not found: ID does not exist" containerID="de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.098941 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290"} err="failed to get container status \"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290\": rpc error: code = NotFound desc = could not find container \"de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290\": container with ID starting with de89875c0c1b52436cd2a6208f4a0a7e390fe7f74e1d6785a3e6a8ac2ee0b290 not found: ID does not exist" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.098976 4801 scope.go:117] "RemoveContainer" containerID="b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5" Dec 06 04:34:35 crc kubenswrapper[4801]: E1206 04:34:35.099378 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5\": container with ID starting with b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5 not found: ID does not exist" containerID="b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.099425 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5"} err="failed to get container status \"b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5\": rpc error: code = NotFound desc = could not find container \"b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5\": container with ID starting with b6f0c7a78cccd1095a2373f95fdff4b7abf1e8036fd390b77fa5db6c4faaeec5 not found: ID does not exist" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.108471 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a869978a-8276-4723-a7a2-d11e4e0d333d" (UID: "a869978a-8276-4723-a7a2-d11e4e0d333d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.138622 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a869978a-8276-4723-a7a2-d11e4e0d333d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.325358 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:35 crc kubenswrapper[4801]: I1206 04:34:35.333399 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kwscb"] Dec 06 04:34:37 crc kubenswrapper[4801]: I1206 04:34:37.223378 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" path="/var/lib/kubelet/pods/a869978a-8276-4723-a7a2-d11e4e0d333d/volumes" Dec 06 04:34:39 crc kubenswrapper[4801]: I1206 04:34:39.212384 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:34:39 crc kubenswrapper[4801]: E1206 
04:34:39.213083 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:34:42 crc kubenswrapper[4801]: I1206 04:34:42.288445 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:42 crc kubenswrapper[4801]: I1206 04:34:42.345605 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.069636 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8xvjm" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="registry-server" containerID="cri-o://426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43" gracePeriod=2 Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.568832 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.718873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content\") pod \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.718965 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities\") pod \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.719021 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b27tt\" (UniqueName: \"kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt\") pod \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\" (UID: \"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b\") " Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.719964 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities" (OuterVolumeSpecName: "utilities") pod "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" (UID: "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.730900 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt" (OuterVolumeSpecName: "kube-api-access-b27tt") pod "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" (UID: "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b"). InnerVolumeSpecName "kube-api-access-b27tt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.765359 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" (UID: "761e5d2e-bc1c-440f-b97e-3734a0f2ac7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.821936 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.821978 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:43 crc kubenswrapper[4801]: I1206 04:34:43.821992 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b27tt\" (UniqueName: \"kubernetes.io/projected/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b-kube-api-access-b27tt\") on node \"crc\" DevicePath \"\"" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.080886 4801 generic.go:334] "Generic (PLEG): container finished" podID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerID="426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43" exitCode=0 Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.080947 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8xvjm" event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerDied","Data":"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43"} Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.081003 4801 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8xvjm" event={"ID":"761e5d2e-bc1c-440f-b97e-3734a0f2ac7b","Type":"ContainerDied","Data":"fa2e46151cc20d608dd73404843bac46ed10aac4ab87069adcda6ebf956d6479"} Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.080962 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8xvjm" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.081021 4801 scope.go:117] "RemoveContainer" containerID="426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.112005 4801 scope.go:117] "RemoveContainer" containerID="d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.147917 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.156605 4801 scope.go:117] "RemoveContainer" containerID="a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.168204 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8xvjm"] Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.186157 4801 scope.go:117] "RemoveContainer" containerID="426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43" Dec 06 04:34:44 crc kubenswrapper[4801]: E1206 04:34:44.188888 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43\": container with ID starting with 426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43 not found: ID does not exist" containerID="426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 
04:34:44.188925 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43"} err="failed to get container status \"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43\": rpc error: code = NotFound desc = could not find container \"426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43\": container with ID starting with 426446fd413435e834a34c730ad42de0bbf520372b2b4c9361af0678c8006a43 not found: ID does not exist" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.188945 4801 scope.go:117] "RemoveContainer" containerID="d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53" Dec 06 04:34:44 crc kubenswrapper[4801]: E1206 04:34:44.189203 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53\": container with ID starting with d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53 not found: ID does not exist" containerID="d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.189229 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53"} err="failed to get container status \"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53\": rpc error: code = NotFound desc = could not find container \"d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53\": container with ID starting with d66dca09631a9865e2bcdd5771704444537d86ca557a0b650b5b506202a98b53 not found: ID does not exist" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.189244 4801 scope.go:117] "RemoveContainer" containerID="a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2" Dec 06 04:34:44 crc 
kubenswrapper[4801]: E1206 04:34:44.189581 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2\": container with ID starting with a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2 not found: ID does not exist" containerID="a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2" Dec 06 04:34:44 crc kubenswrapper[4801]: I1206 04:34:44.189603 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2"} err="failed to get container status \"a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2\": rpc error: code = NotFound desc = could not find container \"a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2\": container with ID starting with a83ce3db43f94d121df18a1b47afd4e194943541434185ce6ec354f617503bb2 not found: ID does not exist" Dec 06 04:34:45 crc kubenswrapper[4801]: I1206 04:34:45.222520 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" path="/var/lib/kubelet/pods/761e5d2e-bc1c-440f-b97e-3734a0f2ac7b/volumes" Dec 06 04:34:54 crc kubenswrapper[4801]: I1206 04:34:54.211895 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:34:54 crc kubenswrapper[4801]: E1206 04:34:54.212631 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:35:08 crc 
kubenswrapper[4801]: I1206 04:35:08.212617 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:35:08 crc kubenswrapper[4801]: E1206 04:35:08.214284 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:35:19 crc kubenswrapper[4801]: I1206 04:35:19.212678 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:35:19 crc kubenswrapper[4801]: E1206 04:35:19.213498 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:35:30 crc kubenswrapper[4801]: I1206 04:35:30.213033 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:35:30 crc kubenswrapper[4801]: E1206 04:35:30.213879 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 
06 04:35:44 crc kubenswrapper[4801]: I1206 04:35:44.212522 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:35:44 crc kubenswrapper[4801]: E1206 04:35:44.213193 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:35:54 crc kubenswrapper[4801]: I1206 04:35:54.338506 4801 generic.go:334] "Generic (PLEG): container finished" podID="f38b08ba-582a-45d7-a085-ccfa93f1a805" containerID="7366fae0b1f91a946a5f119748c725b243c646f029427f595c14ac1d0948a213" exitCode=1 Dec 06 04:35:54 crc kubenswrapper[4801]: I1206 04:35:54.338635 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f38b08ba-582a-45d7-a085-ccfa93f1a805","Type":"ContainerDied","Data":"7366fae0b1f91a946a5f119748c725b243c646f029427f595c14ac1d0948a213"} Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.752117 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794423 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794559 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794699 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794746 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794873 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.794935 4801 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795017 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795058 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795135 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkvrs\" (UniqueName: \"kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs\") pod \"f38b08ba-582a-45d7-a085-ccfa93f1a805\" (UID: \"f38b08ba-582a-45d7-a085-ccfa93f1a805\") " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795532 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795802 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data" (OuterVolumeSpecName: "config-data") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.795882 4801 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.798510 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.800314 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.804669 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs" (OuterVolumeSpecName: "kube-api-access-rkvrs") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "kube-api-access-rkvrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.820982 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.829582 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.838241 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.842821 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f38b08ba-582a-45d7-a085-ccfa93f1a805" (UID: "f38b08ba-582a-45d7-a085-ccfa93f1a805"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897736 4801 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f38b08ba-582a-45d7-a085-ccfa93f1a805-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897809 4801 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897825 4801 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897839 4801 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897847 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkvrs\" (UniqueName: \"kubernetes.io/projected/f38b08ba-582a-45d7-a085-ccfa93f1a805-kube-api-access-rkvrs\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897857 4801 reconciler_common.go:293] "Volume detached for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897865 4801 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f38b08ba-582a-45d7-a085-ccfa93f1a805-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.897875 4801 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f38b08ba-582a-45d7-a085-ccfa93f1a805-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:55 crc kubenswrapper[4801]: I1206 04:35:55.920168 4801 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 06 04:35:56 crc kubenswrapper[4801]: I1206 04:35:56.000505 4801 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 06 04:35:56 crc kubenswrapper[4801]: I1206 04:35:56.358816 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"f38b08ba-582a-45d7-a085-ccfa93f1a805","Type":"ContainerDied","Data":"94307e56fb555f20cab31ece35652b57b2f5fe250db0a5e2f2c00168f0380bf4"} Dec 06 04:35:56 crc kubenswrapper[4801]: I1206 04:35:56.359220 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94307e56fb555f20cab31ece35652b57b2f5fe250db0a5e2f2c00168f0380bf4" Dec 06 04:35:56 crc kubenswrapper[4801]: I1206 04:35:56.358907 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 04:35:59 crc kubenswrapper[4801]: I1206 04:35:59.212263 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:35:59 crc kubenswrapper[4801]: E1206 04:35:59.212863 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.869357 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870706 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="extract-content" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870723 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="extract-content" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870737 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="extract-utilities" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870744 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="extract-utilities" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870768 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38b08ba-582a-45d7-a085-ccfa93f1a805" containerName="tempest-tests-tempest-tests-runner" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 
04:36:03.870775 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38b08ba-582a-45d7-a085-ccfa93f1a805" containerName="tempest-tests-tempest-tests-runner" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870798 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870805 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870834 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="extract-content" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870839 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="extract-content" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870848 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="extract-utilities" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870855 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="extract-utilities" Dec 06 04:36:03 crc kubenswrapper[4801]: E1206 04:36:03.870872 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.870879 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.871080 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="761e5d2e-bc1c-440f-b97e-3734a0f2ac7b" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: 
I1206 04:36:03.871106 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38b08ba-582a-45d7-a085-ccfa93f1a805" containerName="tempest-tests-tempest-tests-runner" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.871123 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a869978a-8276-4723-a7a2-d11e4e0d333d" containerName="registry-server" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.872103 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.876468 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hkth4" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.883608 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.888247 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9h55\" (UniqueName: \"kubernetes.io/projected/f5483367-9823-4939-b2b9-5e519ef4c811-kube-api-access-d9h55\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.888370 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.990098 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.990237 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9h55\" (UniqueName: \"kubernetes.io/projected/f5483367-9823-4939-b2b9-5e519ef4c811-kube-api-access-d9h55\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:03 crc kubenswrapper[4801]: I1206 04:36:03.991151 4801 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:04 crc kubenswrapper[4801]: I1206 04:36:04.012495 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9h55\" (UniqueName: \"kubernetes.io/projected/f5483367-9823-4939-b2b9-5e519ef4c811-kube-api-access-d9h55\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:04 crc kubenswrapper[4801]: I1206 04:36:04.041403 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f5483367-9823-4939-b2b9-5e519ef4c811\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:04 crc kubenswrapper[4801]: I1206 04:36:04.205645 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 04:36:04 crc kubenswrapper[4801]: I1206 04:36:04.895108 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 04:36:05 crc kubenswrapper[4801]: I1206 04:36:05.450936 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f5483367-9823-4939-b2b9-5e519ef4c811","Type":"ContainerStarted","Data":"efff227ccc1f9d6c70587e3c9a5846ffc572708b16ced1fc3e18c4aab7f6384c"} Dec 06 04:36:06 crc kubenswrapper[4801]: I1206 04:36:06.463104 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f5483367-9823-4939-b2b9-5e519ef4c811","Type":"ContainerStarted","Data":"8b3647b04c00763ca3c9ee55f6df94cd1a2fe2bf3a50895cac06f5a67a6ec7c6"} Dec 06 04:36:06 crc kubenswrapper[4801]: I1206 04:36:06.480464 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.342275682 podStartE2EDuration="3.480444358s" podCreationTimestamp="2025-12-06 04:36:03 +0000 UTC" firstStartedPulling="2025-12-06 04:36:04.914023428 +0000 UTC m=+5418.036630990" lastFinishedPulling="2025-12-06 04:36:06.052192094 +0000 UTC m=+5419.174799666" observedRunningTime="2025-12-06 04:36:06.478020022 +0000 UTC m=+5419.600627594" watchObservedRunningTime="2025-12-06 04:36:06.480444358 +0000 UTC m=+5419.603051930" Dec 06 04:36:14 crc kubenswrapper[4801]: I1206 04:36:14.212152 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:36:14 crc 
kubenswrapper[4801]: I1206 04:36:14.534461 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453"} Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.842899 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cd686/must-gather-tp6lv"] Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.845224 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.848925 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cd686"/"default-dockercfg-srpnt" Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.848938 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cd686"/"kube-root-ca.crt" Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.849115 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cd686"/"openshift-service-ca.crt" Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.855219 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cd686/must-gather-tp6lv"] Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.952023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hdbf\" (UniqueName: \"kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:44 crc kubenswrapper[4801]: I1206 04:36:44.952436 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.054631 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hdbf\" (UniqueName: \"kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.054723 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.055121 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.073104 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hdbf\" (UniqueName: \"kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf\") pod \"must-gather-tp6lv\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.164453 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.631152 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cd686/must-gather-tp6lv"] Dec 06 04:36:45 crc kubenswrapper[4801]: I1206 04:36:45.771190 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/must-gather-tp6lv" event={"ID":"2a7769b9-27d2-439c-a836-872fabd0076d","Type":"ContainerStarted","Data":"39c364ca8676984b36b4737c988a9ba51899e08aab29932386dfaf5d8b66140a"} Dec 06 04:36:50 crc kubenswrapper[4801]: I1206 04:36:50.827429 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/must-gather-tp6lv" event={"ID":"2a7769b9-27d2-439c-a836-872fabd0076d","Type":"ContainerStarted","Data":"d3c8359f8f902bd79e04f01323b4aeb0b513c0d81817baa06b37e8e9f2999a30"} Dec 06 04:36:50 crc kubenswrapper[4801]: I1206 04:36:50.828100 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/must-gather-tp6lv" event={"ID":"2a7769b9-27d2-439c-a836-872fabd0076d","Type":"ContainerStarted","Data":"31b9321e2e84a4f2b20c1135fbe33584ecb4c5b32316dff898a73352e4c84b1b"} Dec 06 04:36:50 crc kubenswrapper[4801]: I1206 04:36:50.847424 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cd686/must-gather-tp6lv" podStartSLOduration=2.882185726 podStartE2EDuration="6.847402723s" podCreationTimestamp="2025-12-06 04:36:44 +0000 UTC" firstStartedPulling="2025-12-06 04:36:45.648502429 +0000 UTC m=+5458.771110001" lastFinishedPulling="2025-12-06 04:36:49.613719426 +0000 UTC m=+5462.736326998" observedRunningTime="2025-12-06 04:36:50.844344871 +0000 UTC m=+5463.966952443" watchObservedRunningTime="2025-12-06 04:36:50.847402723 +0000 UTC m=+5463.970010295" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.010335 4801 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-cd686/crc-debug-4dltm"] Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.012041 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.080273 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.080579 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrkpl\" (UniqueName: \"kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.182892 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrkpl\" (UniqueName: \"kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.183311 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.183432 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.205659 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrkpl\" (UniqueName: \"kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl\") pod \"crc-debug-4dltm\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.331332 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:36:54 crc kubenswrapper[4801]: I1206 04:36:54.863283 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-4dltm" event={"ID":"3251aa33-8dff-4794-aeab-998592eeb007","Type":"ContainerStarted","Data":"2d7bb7245c65a64750351ddab6d5a32a8ae7638044b04cc4e6747d2363724c5d"} Dec 06 04:37:05 crc kubenswrapper[4801]: I1206 04:37:05.974091 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-4dltm" event={"ID":"3251aa33-8dff-4794-aeab-998592eeb007","Type":"ContainerStarted","Data":"a50bc636323a535958200c3ca508afb78f54fe4d99959880705c80fd2c7d6401"} Dec 06 04:37:05 crc kubenswrapper[4801]: I1206 04:37:05.988374 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cd686/crc-debug-4dltm" podStartSLOduration=2.274932922 podStartE2EDuration="12.988353262s" podCreationTimestamp="2025-12-06 04:36:53 +0000 UTC" firstStartedPulling="2025-12-06 04:36:54.406563884 +0000 UTC m=+5467.529171456" lastFinishedPulling="2025-12-06 04:37:05.119984224 +0000 UTC m=+5478.242591796" observedRunningTime="2025-12-06 04:37:05.986207795 +0000 UTC m=+5479.108815367" watchObservedRunningTime="2025-12-06 
04:37:05.988353262 +0000 UTC m=+5479.110960834" Dec 06 04:37:54 crc kubenswrapper[4801]: I1206 04:37:54.401384 4801 generic.go:334] "Generic (PLEG): container finished" podID="3251aa33-8dff-4794-aeab-998592eeb007" containerID="a50bc636323a535958200c3ca508afb78f54fe4d99959880705c80fd2c7d6401" exitCode=0 Dec 06 04:37:54 crc kubenswrapper[4801]: I1206 04:37:54.401475 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-4dltm" event={"ID":"3251aa33-8dff-4794-aeab-998592eeb007","Type":"ContainerDied","Data":"a50bc636323a535958200c3ca508afb78f54fe4d99959880705c80fd2c7d6401"} Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.506441 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.540949 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cd686/crc-debug-4dltm"] Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.548880 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cd686/crc-debug-4dltm"] Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.555304 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host\") pod \"3251aa33-8dff-4794-aeab-998592eeb007\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.555404 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host" (OuterVolumeSpecName: "host") pod "3251aa33-8dff-4794-aeab-998592eeb007" (UID: "3251aa33-8dff-4794-aeab-998592eeb007"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.555595 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrkpl\" (UniqueName: \"kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl\") pod \"3251aa33-8dff-4794-aeab-998592eeb007\" (UID: \"3251aa33-8dff-4794-aeab-998592eeb007\") " Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.556229 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3251aa33-8dff-4794-aeab-998592eeb007-host\") on node \"crc\" DevicePath \"\"" Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.562966 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl" (OuterVolumeSpecName: "kube-api-access-qrkpl") pod "3251aa33-8dff-4794-aeab-998592eeb007" (UID: "3251aa33-8dff-4794-aeab-998592eeb007"). InnerVolumeSpecName "kube-api-access-qrkpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:37:55 crc kubenswrapper[4801]: I1206 04:37:55.658678 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrkpl\" (UniqueName: \"kubernetes.io/projected/3251aa33-8dff-4794-aeab-998592eeb007-kube-api-access-qrkpl\") on node \"crc\" DevicePath \"\"" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.419097 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d7bb7245c65a64750351ddab6d5a32a8ae7638044b04cc4e6747d2363724c5d" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.419146 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-4dltm" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.690083 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cd686/crc-debug-44ggj"] Dec 06 04:37:56 crc kubenswrapper[4801]: E1206 04:37:56.690535 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3251aa33-8dff-4794-aeab-998592eeb007" containerName="container-00" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.690554 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="3251aa33-8dff-4794-aeab-998592eeb007" containerName="container-00" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.690857 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="3251aa33-8dff-4794-aeab-998592eeb007" containerName="container-00" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.691605 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.781127 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6m8\" (UniqueName: \"kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.781641 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.883025 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk6m8\" (UniqueName: 
\"kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.883147 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.883262 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:56 crc kubenswrapper[4801]: I1206 04:37:56.901418 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk6m8\" (UniqueName: \"kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8\") pod \"crc-debug-44ggj\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:57 crc kubenswrapper[4801]: I1206 04:37:57.006668 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:57 crc kubenswrapper[4801]: I1206 04:37:57.222956 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3251aa33-8dff-4794-aeab-998592eeb007" path="/var/lib/kubelet/pods/3251aa33-8dff-4794-aeab-998592eeb007/volumes" Dec 06 04:37:57 crc kubenswrapper[4801]: I1206 04:37:57.429184 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-44ggj" event={"ID":"6853f144-2397-4dc7-b6a4-bae3ef4ff04e","Type":"ContainerStarted","Data":"90fa09da9d099938d93c3fd520ed10db9a962de47e798a876816b577847bcf43"} Dec 06 04:37:57 crc kubenswrapper[4801]: I1206 04:37:57.429256 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-44ggj" event={"ID":"6853f144-2397-4dc7-b6a4-bae3ef4ff04e","Type":"ContainerStarted","Data":"c6e2c58c6423166c919a812417ff1081719a550a010795736855010792ed8971"} Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.438674 4801 generic.go:334] "Generic (PLEG): container finished" podID="6853f144-2397-4dc7-b6a4-bae3ef4ff04e" containerID="90fa09da9d099938d93c3fd520ed10db9a962de47e798a876816b577847bcf43" exitCode=0 Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.438833 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-44ggj" event={"ID":"6853f144-2397-4dc7-b6a4-bae3ef4ff04e","Type":"ContainerDied","Data":"90fa09da9d099938d93c3fd520ed10db9a962de47e798a876816b577847bcf43"} Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.529449 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.615932 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk6m8\" (UniqueName: \"kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8\") pod \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.616293 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host\") pod \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\" (UID: \"6853f144-2397-4dc7-b6a4-bae3ef4ff04e\") " Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.616924 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host" (OuterVolumeSpecName: "host") pod "6853f144-2397-4dc7-b6a4-bae3ef4ff04e" (UID: "6853f144-2397-4dc7-b6a4-bae3ef4ff04e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.629299 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8" (OuterVolumeSpecName: "kube-api-access-jk6m8") pod "6853f144-2397-4dc7-b6a4-bae3ef4ff04e" (UID: "6853f144-2397-4dc7-b6a4-bae3ef4ff04e"). InnerVolumeSpecName "kube-api-access-jk6m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.718313 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk6m8\" (UniqueName: \"kubernetes.io/projected/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-kube-api-access-jk6m8\") on node \"crc\" DevicePath \"\"" Dec 06 04:37:58 crc kubenswrapper[4801]: I1206 04:37:58.718354 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6853f144-2397-4dc7-b6a4-bae3ef4ff04e-host\") on node \"crc\" DevicePath \"\"" Dec 06 04:37:59 crc kubenswrapper[4801]: I1206 04:37:59.451491 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-44ggj" event={"ID":"6853f144-2397-4dc7-b6a4-bae3ef4ff04e","Type":"ContainerDied","Data":"c6e2c58c6423166c919a812417ff1081719a550a010795736855010792ed8971"} Dec 06 04:37:59 crc kubenswrapper[4801]: I1206 04:37:59.451918 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6e2c58c6423166c919a812417ff1081719a550a010795736855010792ed8971" Dec 06 04:37:59 crc kubenswrapper[4801]: I1206 04:37:59.451584 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-44ggj" Dec 06 04:38:00 crc kubenswrapper[4801]: I1206 04:38:00.286847 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cd686/crc-debug-44ggj"] Dec 06 04:38:00 crc kubenswrapper[4801]: I1206 04:38:00.294936 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cd686/crc-debug-44ggj"] Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.230557 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6853f144-2397-4dc7-b6a4-bae3ef4ff04e" path="/var/lib/kubelet/pods/6853f144-2397-4dc7-b6a4-bae3ef4ff04e/volumes" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.465437 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cd686/crc-debug-2vp7p"] Dec 06 04:38:01 crc kubenswrapper[4801]: E1206 04:38:01.466049 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853f144-2397-4dc7-b6a4-bae3ef4ff04e" containerName="container-00" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.466080 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853f144-2397-4dc7-b6a4-bae3ef4ff04e" containerName="container-00" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.466417 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="6853f144-2397-4dc7-b6a4-bae3ef4ff04e" containerName="container-00" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.468345 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.571731 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.571985 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zl76\" (UniqueName: \"kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.673650 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.673816 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.673701 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zl76\" (UniqueName: \"kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc 
kubenswrapper[4801]: I1206 04:38:01.700616 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zl76\" (UniqueName: \"kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76\") pod \"crc-debug-2vp7p\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:01 crc kubenswrapper[4801]: I1206 04:38:01.793867 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:02 crc kubenswrapper[4801]: I1206 04:38:02.488163 4801 generic.go:334] "Generic (PLEG): container finished" podID="49b32237-7791-49ca-ae27-78e75c81a6c9" containerID="dc74cd623d31eb91905c3414b1f4e801dd7d5b15a4e5be5a2f58abb5a0905dc9" exitCode=0 Dec 06 04:38:02 crc kubenswrapper[4801]: I1206 04:38:02.488316 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-2vp7p" event={"ID":"49b32237-7791-49ca-ae27-78e75c81a6c9","Type":"ContainerDied","Data":"dc74cd623d31eb91905c3414b1f4e801dd7d5b15a4e5be5a2f58abb5a0905dc9"} Dec 06 04:38:02 crc kubenswrapper[4801]: I1206 04:38:02.488523 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/crc-debug-2vp7p" event={"ID":"49b32237-7791-49ca-ae27-78e75c81a6c9","Type":"ContainerStarted","Data":"a6b78c0016d295d22371d5e1ec75d92f14ad15f4fd98c0a47fe4447551e0cce1"} Dec 06 04:38:02 crc kubenswrapper[4801]: I1206 04:38:02.541572 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cd686/crc-debug-2vp7p"] Dec 06 04:38:02 crc kubenswrapper[4801]: I1206 04:38:02.554457 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cd686/crc-debug-2vp7p"] Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.594084 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.713338 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zl76\" (UniqueName: \"kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76\") pod \"49b32237-7791-49ca-ae27-78e75c81a6c9\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.713431 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host\") pod \"49b32237-7791-49ca-ae27-78e75c81a6c9\" (UID: \"49b32237-7791-49ca-ae27-78e75c81a6c9\") " Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.713550 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host" (OuterVolumeSpecName: "host") pod "49b32237-7791-49ca-ae27-78e75c81a6c9" (UID: "49b32237-7791-49ca-ae27-78e75c81a6c9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.714015 4801 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/49b32237-7791-49ca-ae27-78e75c81a6c9-host\") on node \"crc\" DevicePath \"\"" Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.719879 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76" (OuterVolumeSpecName: "kube-api-access-9zl76") pod "49b32237-7791-49ca-ae27-78e75c81a6c9" (UID: "49b32237-7791-49ca-ae27-78e75c81a6c9"). InnerVolumeSpecName "kube-api-access-9zl76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:38:03 crc kubenswrapper[4801]: I1206 04:38:03.815374 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zl76\" (UniqueName: \"kubernetes.io/projected/49b32237-7791-49ca-ae27-78e75c81a6c9-kube-api-access-9zl76\") on node \"crc\" DevicePath \"\"" Dec 06 04:38:04 crc kubenswrapper[4801]: I1206 04:38:04.508639 4801 scope.go:117] "RemoveContainer" containerID="dc74cd623d31eb91905c3414b1f4e801dd7d5b15a4e5be5a2f58abb5a0905dc9" Dec 06 04:38:04 crc kubenswrapper[4801]: I1206 04:38:04.508692 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/crc-debug-2vp7p" Dec 06 04:38:05 crc kubenswrapper[4801]: I1206 04:38:05.223184 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b32237-7791-49ca-ae27-78e75c81a6c9" path="/var/lib/kubelet/pods/49b32237-7791-49ca-ae27-78e75c81a6c9/volumes" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.287170 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:33 crc kubenswrapper[4801]: E1206 04:38:33.290772 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b32237-7791-49ca-ae27-78e75c81a6c9" containerName="container-00" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.290799 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b32237-7791-49ca-ae27-78e75c81a6c9" containerName="container-00" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.291011 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b32237-7791-49ca-ae27-78e75c81a6c9" containerName="container-00" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.293814 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.314867 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.422847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.423212 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.423245 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsspf\" (UniqueName: \"kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.524704 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.524782 4801 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.524809 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsspf\" (UniqueName: \"kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.525561 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.525791 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.559312 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsspf\" (UniqueName: \"kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf\") pod \"redhat-operators-vd7w2\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:33 crc kubenswrapper[4801]: I1206 04:38:33.634951 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:34 crc kubenswrapper[4801]: I1206 04:38:34.099383 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:34 crc kubenswrapper[4801]: I1206 04:38:34.779619 4801 generic.go:334] "Generic (PLEG): container finished" podID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerID="b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1" exitCode=0 Dec 06 04:38:34 crc kubenswrapper[4801]: I1206 04:38:34.779669 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerDied","Data":"b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1"} Dec 06 04:38:34 crc kubenswrapper[4801]: I1206 04:38:34.779999 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerStarted","Data":"0aece7435aa114f43fa9747af7afa989c8030550aff7ed644e9c61a26372b062"} Dec 06 04:38:35 crc kubenswrapper[4801]: I1206 04:38:35.791332 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerStarted","Data":"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9"} Dec 06 04:38:37 crc kubenswrapper[4801]: I1206 04:38:37.810919 4801 generic.go:334] "Generic (PLEG): container finished" podID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerID="cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9" exitCode=0 Dec 06 04:38:37 crc kubenswrapper[4801]: I1206 04:38:37.811017 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" 
event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerDied","Data":"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9"} Dec 06 04:38:37 crc kubenswrapper[4801]: I1206 04:38:37.903014 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66dd4c5cfd-fvt9d_146f23e1-de81-444d-88cb-a41601ffd36d/barbican-api/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.131510 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-66dd4c5cfd-fvt9d_146f23e1-de81-444d-88cb-a41601ffd36d/barbican-api-log/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.145374 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-664fff78fd-lzlf4_65827ef2-44cd-4f16-82a9-9b746243a301/barbican-keystone-listener/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.311098 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-664fff78fd-lzlf4_65827ef2-44cd-4f16-82a9-9b746243a301/barbican-keystone-listener-log/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.367557 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bc55fb7dc-pm7jf_3aa76c70-d21a-495e-8599-9ca195e8fe53/barbican-worker-log/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.420068 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7bc55fb7dc-pm7jf_3aa76c70-d21a-495e-8599-9ca195e8fe53/barbican-worker/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.639051 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5snf9_55bc4fed-30e6-430f-a4d8-6be830c1f268/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.681489 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_e69f07f2-fed0-4999-9167-1d3c6d17fccd/ceilometer-central-agent/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.760944 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69f07f2-fed0-4999-9167-1d3c6d17fccd/ceilometer-notification-agent/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.819903 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerStarted","Data":"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa"} Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.843066 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vd7w2" podStartSLOduration=2.240266144 podStartE2EDuration="5.843043489s" podCreationTimestamp="2025-12-06 04:38:33 +0000 UTC" firstStartedPulling="2025-12-06 04:38:34.781055296 +0000 UTC m=+5567.903662868" lastFinishedPulling="2025-12-06 04:38:38.383832641 +0000 UTC m=+5571.506440213" observedRunningTime="2025-12-06 04:38:38.83712904 +0000 UTC m=+5571.959736612" watchObservedRunningTime="2025-12-06 04:38:38.843043489 +0000 UTC m=+5571.965651061" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.880260 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69f07f2-fed0-4999-9167-1d3c6d17fccd/proxy-httpd/0.log" Dec 06 04:38:38 crc kubenswrapper[4801]: I1206 04:38:38.883136 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69f07f2-fed0-4999-9167-1d3c6d17fccd/sg-core/0.log" Dec 06 04:38:39 crc kubenswrapper[4801]: I1206 04:38:39.013668 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-rb974_5fa3a819-e36d-4ee7-9730-53f9f5eaa1ae/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:39 crc 
kubenswrapper[4801]: I1206 04:38:39.125623 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-qrqbv_ace31379-943d-48d3-b156-c449eae9325c/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:39 crc kubenswrapper[4801]: I1206 04:38:39.657830 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d6100205-050d-4862-b25a-b4152511de4e/probe/0.log" Dec 06 04:38:39 crc kubenswrapper[4801]: I1206 04:38:39.804442 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_09f92f28-d85e-47ea-a585-a67fb86a540f/cinder-api/0.log" Dec 06 04:38:40 crc kubenswrapper[4801]: I1206 04:38:40.192082 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_09f92f28-d85e-47ea-a585-a67fb86a540f/cinder-api-log/0.log" Dec 06 04:38:40 crc kubenswrapper[4801]: I1206 04:38:40.203742 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241/cinder-scheduler/0.log" Dec 06 04:38:40 crc kubenswrapper[4801]: I1206 04:38:40.484667 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_4ac98ef4-8cfc-440f-b9b6-f1a92ae7e241/probe/0.log" Dec 06 04:38:40 crc kubenswrapper[4801]: I1206 04:38:40.790709 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5a1dce19-9384-4038-9e0a-4cfc3de377a6/probe/0.log" Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.041604 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-rz2c2_6d1eac30-6555-4e4f-a285-0f988967b438/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.169282 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.169344 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.516543 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-29mst_9795699b-76ac-46ce-a6bf-0898ea8817f1/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.906827 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-rxnbr_8653b927-16f7-4400-8965-2ebdd408c0ca/init/0.log" Dec 06 04:38:41 crc kubenswrapper[4801]: I1206 04:38:41.995320 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-rxnbr_8653b927-16f7-4400-8965-2ebdd408c0ca/init/0.log" Dec 06 04:38:42 crc kubenswrapper[4801]: I1206 04:38:42.486328 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54/glance-log/0.log" Dec 06 04:38:42 crc kubenswrapper[4801]: I1206 04:38:42.725423 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_2f0e4d56-eeb9-4002-ae70-3a5b6f10cd54/glance-httpd/0.log" Dec 06 04:38:42 crc kubenswrapper[4801]: I1206 04:38:42.772798 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-rxnbr_8653b927-16f7-4400-8965-2ebdd408c0ca/dnsmasq-dns/0.log" Dec 06 04:38:43 crc 
kubenswrapper[4801]: I1206 04:38:43.001145 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9763c19c-a748-434f-a868-af381202b97e/glance-httpd/0.log" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.041562 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_9763c19c-a748-434f-a868-af381202b97e/glance-log/0.log" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.545419 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d85575696-vjhxr_3a358806-cf3d-4c1c-853a-ab310d0c7058/horizon/0.log" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.554466 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-grxjr_cbcf1692-907f-4ec9-a315-f39d2696c9f0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.635957 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.636014 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:43 crc kubenswrapper[4801]: I1206 04:38:43.920977 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d85575696-vjhxr_3a358806-cf3d-4c1c-853a-ab310d0c7058/horizon-log/0.log" Dec 06 04:38:44 crc kubenswrapper[4801]: I1206 04:38:44.708106 4801 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vd7w2" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="registry-server" probeResult="failure" output=< Dec 06 04:38:44 crc kubenswrapper[4801]: timeout: failed to connect service ":50051" within 1s Dec 06 04:38:44 crc kubenswrapper[4801]: > Dec 06 04:38:44 crc kubenswrapper[4801]: I1206 04:38:44.841341 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416561-8pbqv_0d0b5912-792b-4abb-9d65-2bc033319f4a/keystone-cron/0.log" Dec 06 04:38:44 crc kubenswrapper[4801]: I1206 04:38:44.864611 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-zltss_1fb7037e-6eca-42a5-b146-02594414a08b/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.064339 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a9b3e048-d2e7-43ef-bac9-cc9536b8c06d/kube-state-metrics/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.336170 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-h7khs_899596fb-4d4f-419a-be54-3d236d8af270/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.475967 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_d6100205-050d-4862-b25a-b4152511de4e/cinder-backup/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.730406 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_07982175-3bb7-4bfa-b4a7-49c3eff288ac/manila-api/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.739631 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_07982175-3bb7-4bfa-b4a7-49c3eff288ac/manila-api-log/0.log" Dec 06 04:38:45 crc kubenswrapper[4801]: I1206 04:38:45.865697 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-66f8fdb7b9-xsvqm_09d46a1d-755b-43d4-81f5-3a3be44ea3d4/keystone-api/0.log" Dec 06 04:38:46 crc kubenswrapper[4801]: I1206 04:38:46.023900 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_606f274f-6ae7-4b11-b684-e95831283ee4/probe/0.log" Dec 06 
04:38:46 crc kubenswrapper[4801]: I1206 04:38:46.058650 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_606f274f-6ae7-4b11-b684-e95831283ee4/manila-scheduler/0.log" Dec 06 04:38:46 crc kubenswrapper[4801]: I1206 04:38:46.176142 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_05cd9a4f-2b17-46aa-85c4-99ca0e3f8642/manila-share/0.log" Dec 06 04:38:46 crc kubenswrapper[4801]: I1206 04:38:46.215035 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_05cd9a4f-2b17-46aa-85c4-99ca0e3f8642/probe/0.log" Dec 06 04:38:46 crc kubenswrapper[4801]: I1206 04:38:46.911837 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554d4f888f-vn47n_87b90546-3593-40c2-9be7-84187756b4cf/neutron-httpd/0.log" Dec 06 04:38:47 crc kubenswrapper[4801]: I1206 04:38:47.097080 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554d4f888f-vn47n_87b90546-3593-40c2-9be7-84187756b4cf/neutron-api/0.log" Dec 06 04:38:47 crc kubenswrapper[4801]: I1206 04:38:47.133148 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-mk5w5_2f85a4d5-23d5-4e42-ba7e-d05f2062ba07/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:47 crc kubenswrapper[4801]: I1206 04:38:47.831208 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_36edc3c9-457a-498f-9938-41f98c8b1491/nova-cell0-conductor-conductor/0.log" Dec 06 04:38:48 crc kubenswrapper[4801]: I1206 04:38:48.113158 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f189c4d3-554b-479b-ba60-7abc6dc13161/nova-api-log/0.log" Dec 06 04:38:48 crc kubenswrapper[4801]: I1206 04:38:48.498790 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_f9207b6a-e3f4-4613-97eb-7c4022ca8fa0/nova-cell1-conductor-conductor/0.log" Dec 06 04:38:48 crc kubenswrapper[4801]: I1206 04:38:48.649301 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8bdab0ac-ed8c-443a-aec4-31e4fcc2fc25/nova-cell1-novncproxy-novncproxy/0.log" Dec 06 04:38:48 crc kubenswrapper[4801]: I1206 04:38:48.685333 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f189c4d3-554b-479b-ba60-7abc6dc13161/nova-api-api/0.log" Dec 06 04:38:48 crc kubenswrapper[4801]: I1206 04:38:48.958637 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-xs64c_5651abf8-1969-4df5-a8bf-274fcc9edffe/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:49 crc kubenswrapper[4801]: I1206 04:38:49.087042 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cde994a7-1f23-4b09-8f61-a7f5f3393960/nova-metadata-log/0.log" Dec 06 04:38:49 crc kubenswrapper[4801]: I1206 04:38:49.613507 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_a9b09a1c-654f-42ab-9f77-012033ce6f13/nova-scheduler-scheduler/0.log" Dec 06 04:38:49 crc kubenswrapper[4801]: I1206 04:38:49.714362 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c85c66a1-6bad-499d-8a59-75020d456cd7/mysql-bootstrap/0.log" Dec 06 04:38:49 crc kubenswrapper[4801]: I1206 04:38:49.948029 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c85c66a1-6bad-499d-8a59-75020d456cd7/galera/0.log" Dec 06 04:38:49 crc kubenswrapper[4801]: I1206 04:38:49.956603 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_c85c66a1-6bad-499d-8a59-75020d456cd7/mysql-bootstrap/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: 
I1206 04:38:50.126214 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_5a1dce19-9384-4038-9e0a-4cfc3de377a6/cinder-volume/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: I1206 04:38:50.227227 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_463cb826-89ba-4c9d-b4ae-9453464d3ebc/mysql-bootstrap/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: I1206 04:38:50.456487 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_463cb826-89ba-4c9d-b4ae-9453464d3ebc/mysql-bootstrap/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: I1206 04:38:50.491982 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_463cb826-89ba-4c9d-b4ae-9453464d3ebc/galera/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: I1206 04:38:50.765926 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gv5hg_00cc7364-fab1-449d-9939-020c58f7e9af/openstack-network-exporter/0.log" Dec 06 04:38:50 crc kubenswrapper[4801]: I1206 04:38:50.930199 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2e59b1c6-c154-42d6-8b79-b35b3bf48cf7/openstackclient/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.143604 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-44f28_276fe396-a90f-4c5b-83ce-ac17c7617e63/ovsdb-server-init/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.334523 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-44f28_276fe396-a90f-4c5b-83ce-ac17c7617e63/ovsdb-server-init/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.372044 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-44f28_276fe396-a90f-4c5b-83ce-ac17c7617e63/ovs-vswitchd/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.408728 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-44f28_276fe396-a90f-4c5b-83ce-ac17c7617e63/ovsdb-server/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.631317 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_cde994a7-1f23-4b09-8f61-a7f5f3393960/nova-metadata-metadata/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.640045 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qqlb5_eefe8d7e-f739-42c8-88fb-2c27a8630e8b/ovn-controller/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.829066 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-trz69_0cf6f03c-8888-4937-ad09-cb2b7b6ecbb0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.885957 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b/ovn-northd/0.log" Dec 06 04:38:51 crc kubenswrapper[4801]: I1206 04:38:51.942747 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62f4ed72-b541-4bcb-9f27-ec5b7ac71a9b/openstack-network-exporter/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.111965 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_dd4e7515-f487-4c9e-b405-a5f61022d5e5/openstack-network-exporter/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.131847 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_dd4e7515-f487-4c9e-b405-a5f61022d5e5/ovsdbserver-nb/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.361766 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4e4cd15-b8c1-4521-82f7-d54fb0141c9b/ovsdbserver-sb/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.416219 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f4e4cd15-b8c1-4521-82f7-d54fb0141c9b/openstack-network-exporter/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.638566 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55dddf74fb-zbzw5_6f0a4a82-0f66-4716-9b11-fc2015676f79/placement-api/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.737323 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_509f393d-bb6a-47e3-a68c-e598c5b37a1b/setup-container/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.774319 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-55dddf74fb-zbzw5_6f0a4a82-0f66-4716-9b11-fc2015676f79/placement-log/0.log" Dec 06 04:38:52 crc kubenswrapper[4801]: I1206 04:38:52.912006 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_509f393d-bb6a-47e3-a68c-e598c5b37a1b/setup-container/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.017684 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_509f393d-bb6a-47e3-a68c-e598c5b37a1b/rabbitmq/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.066559 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96374aa1-9e52-440e-b058-26ed49f7b0e9/setup-container/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.298479 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96374aa1-9e52-440e-b058-26ed49f7b0e9/rabbitmq/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.320681 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_96374aa1-9e52-440e-b058-26ed49f7b0e9/setup-container/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.352491 4801 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-8xlqv_8c6a6819-7858-49d3-acc8-5b3cf8660213/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.593699 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pwxrx_40993b38-48ff-41fb-90a7-9c9fc03dd1e3/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.606978 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zwb66_17c94e6a-0e75-45a9-a11d-31eae796de72/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.682725 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.740037 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.866327 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-c9sqg_497b1d32-7e25-419a-9daa-425b6de5889c/ssh-known-hosts-edpm-deployment/0.log" Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.920682 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:53 crc kubenswrapper[4801]: I1206 04:38:53.979268 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_f38b08ba-582a-45d7-a085-ccfa93f1a805/tempest-tests-tempest-tests-runner/0.log" Dec 06 04:38:54 crc kubenswrapper[4801]: I1206 04:38:54.135348 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f5483367-9823-4939-b2b9-5e519ef4c811/test-operator-logs-container/0.log" Dec 06 04:38:54 crc kubenswrapper[4801]: I1206 04:38:54.292144 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-j29xq_0be3f374-f93f-4533-aa01-b56ae87544a9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 04:38:54 crc kubenswrapper[4801]: I1206 04:38:54.990277 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vd7w2" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="registry-server" containerID="cri-o://a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa" gracePeriod=2 Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.567163 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.726485 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities\") pod \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.726730 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content\") pod \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.726839 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsspf\" (UniqueName: \"kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf\") pod 
\"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\" (UID: \"a87ebc9a-db5f-4ebe-96e2-5c605980c09f\") " Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.728004 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities" (OuterVolumeSpecName: "utilities") pod "a87ebc9a-db5f-4ebe-96e2-5c605980c09f" (UID: "a87ebc9a-db5f-4ebe-96e2-5c605980c09f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.753024 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf" (OuterVolumeSpecName: "kube-api-access-lsspf") pod "a87ebc9a-db5f-4ebe-96e2-5c605980c09f" (UID: "a87ebc9a-db5f-4ebe-96e2-5c605980c09f"). InnerVolumeSpecName "kube-api-access-lsspf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.828565 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsspf\" (UniqueName: \"kubernetes.io/projected/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-kube-api-access-lsspf\") on node \"crc\" DevicePath \"\"" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.828598 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.862382 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a87ebc9a-db5f-4ebe-96e2-5c605980c09f" (UID: "a87ebc9a-db5f-4ebe-96e2-5c605980c09f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:38:55 crc kubenswrapper[4801]: I1206 04:38:55.930158 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a87ebc9a-db5f-4ebe-96e2-5c605980c09f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.001193 4801 generic.go:334] "Generic (PLEG): container finished" podID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerID="a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa" exitCode=0 Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.001232 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerDied","Data":"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa"} Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.001259 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vd7w2" event={"ID":"a87ebc9a-db5f-4ebe-96e2-5c605980c09f","Type":"ContainerDied","Data":"0aece7435aa114f43fa9747af7afa989c8030550aff7ed644e9c61a26372b062"} Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.001277 4801 scope.go:117] "RemoveContainer" containerID="a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.001397 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vd7w2" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.036830 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.057280 4801 scope.go:117] "RemoveContainer" containerID="cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.057667 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vd7w2"] Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.124155 4801 scope.go:117] "RemoveContainer" containerID="b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.158003 4801 scope.go:117] "RemoveContainer" containerID="a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa" Dec 06 04:38:56 crc kubenswrapper[4801]: E1206 04:38:56.161711 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa\": container with ID starting with a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa not found: ID does not exist" containerID="a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.161798 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa"} err="failed to get container status \"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa\": rpc error: code = NotFound desc = could not find container \"a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa\": container with ID starting with a1eb1123d5f7b1652cb0bb9e2018f13ccef6409b884d71546409caf6e54c6caa not found: ID does 
not exist" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.161836 4801 scope.go:117] "RemoveContainer" containerID="cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9" Dec 06 04:38:56 crc kubenswrapper[4801]: E1206 04:38:56.162359 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9\": container with ID starting with cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9 not found: ID does not exist" containerID="cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.162419 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9"} err="failed to get container status \"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9\": rpc error: code = NotFound desc = could not find container \"cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9\": container with ID starting with cb520526886ec11201aae71a7d55047ee3f9946958e2ab83b5231d9a73122de9 not found: ID does not exist" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.162458 4801 scope.go:117] "RemoveContainer" containerID="b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1" Dec 06 04:38:56 crc kubenswrapper[4801]: E1206 04:38:56.162854 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1\": container with ID starting with b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1 not found: ID does not exist" containerID="b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1" Dec 06 04:38:56 crc kubenswrapper[4801]: I1206 04:38:56.162893 4801 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1"} err="failed to get container status \"b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1\": rpc error: code = NotFound desc = could not find container \"b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1\": container with ID starting with b506086a7486f59d7ab86a5b920919b93f41365b4ba201b23aa7e414264115b1 not found: ID does not exist" Dec 06 04:38:57 crc kubenswrapper[4801]: I1206 04:38:57.223212 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" path="/var/lib/kubelet/pods/a87ebc9a-db5f-4ebe-96e2-5c605980c09f/volumes" Dec 06 04:39:08 crc kubenswrapper[4801]: I1206 04:39:08.884206 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_225c5f5f-7422-45ff-a2b8-2b9d3b577d79/memcached/0.log" Dec 06 04:39:11 crc kubenswrapper[4801]: I1206 04:39:11.169841 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:39:11 crc kubenswrapper[4801]: I1206 04:39:11.170328 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.340178 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-6tvst_bf89afba-23bf-4d4e-8de6-58be01700897/kube-rbac-proxy/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: 
I1206 04:39:20.396092 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-6tvst_bf89afba-23bf-4d4e-8de6-58be01700897/manager/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.524160 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/util/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.716676 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/pull/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.725687 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/util/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.743462 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/pull/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.910828 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/pull/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.922800 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/util/0.log" Dec 06 04:39:20 crc kubenswrapper[4801]: I1206 04:39:20.933551 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ce07a63be150ba00d9869a51be8d012a4ccc4e29168f2384702976c3c34sc4x_ed3e81c7-6078-42e2-a230-1dcb4b0ce766/extract/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.091311 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-68dd88d65f-bgnqt_075bc058-a6db-435f-b4da-78d269436fc5/kube-rbac-proxy/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.160477 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-68dd88d65f-bgnqt_075bc058-a6db-435f-b4da-78d269436fc5/manager/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.211408 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2ft2k_6a1623fb-e41b-4fb4-ad84-a9d95a642210/kube-rbac-proxy/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.328147 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-2ft2k_6a1623fb-e41b-4fb4-ad84-a9d95a642210/manager/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.403608 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-kfdvh_c9d1b6fe-6fbe-42b4-b6d3-88864f542000/kube-rbac-proxy/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.474763 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-kfdvh_c9d1b6fe-6fbe-42b4-b6d3-88864f542000/manager/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.597284 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-b659z_83987163-dcd4-42d5-98fb-155bc07daf26/kube-rbac-proxy/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 
04:39:21.640476 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-b659z_83987163-dcd4-42d5-98fb-155bc07daf26/manager/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.796472 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-zsdh6_936dc55c-43bb-4e3d-8970-0811d582232a/kube-rbac-proxy/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.798784 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-zsdh6_936dc55c-43bb-4e3d-8970-0811d582232a/manager/0.log" Dec 06 04:39:21 crc kubenswrapper[4801]: I1206 04:39:21.922559 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-tb9mp_cd4c204b-eb70-4ed7-8800-9c0aa8df0894/kube-rbac-proxy/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.170745 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9gv2p_5e8887af-c61f-4cb5-83ae-c0a62adfb3b2/kube-rbac-proxy/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.195863 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-9gv2p_5e8887af-c61f-4cb5-83ae-c0a62adfb3b2/manager/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.308938 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-tb9mp_cd4c204b-eb70-4ed7-8800-9c0aa8df0894/manager/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.455356 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-d2q7s_528abee5-1816-4693-8c8d-ec8addacf287/kube-rbac-proxy/0.log" Dec 06 
04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.664244 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-d2q7s_528abee5-1816-4693-8c8d-ec8addacf287/manager/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.836700 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-7qljd_adf5388c-f2b1-4cce-9616-03c9ecde87e8/manager/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.836961 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-7qljd_adf5388c-f2b1-4cce-9616-03c9ecde87e8/kube-rbac-proxy/0.log" Dec 06 04:39:22 crc kubenswrapper[4801]: I1206 04:39:22.986350 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-xm85p_67793857-efbd-4ac4-8c3d-0f5f508ae3ee/kube-rbac-proxy/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.124856 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-xm85p_67793857-efbd-4ac4-8c3d-0f5f508ae3ee/manager/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.184931 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-95rj5_01a93f61-bdef-4ff2-9f14-357a4737f0fa/kube-rbac-proxy/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.306894 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-95rj5_01a93f61-bdef-4ff2-9f14-357a4737f0fa/manager/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.373676 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5m2qc_7cd29fbc-0b7b-4619-97b5-febfdd86a6e2/kube-rbac-proxy/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.457711 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5m2qc_7cd29fbc-0b7b-4619-97b5-febfdd86a6e2/manager/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.588367 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-fcq4r_c84bc554-95d1-4cb3-889e-e3eb348d5b37/manager/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.616732 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-fcq4r_c84bc554-95d1-4cb3-889e-e3eb348d5b37/kube-rbac-proxy/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.783925 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd44nznw_16612e3e-2588-413f-b0ff-0a97864485ca/kube-rbac-proxy/0.log" Dec 06 04:39:23 crc kubenswrapper[4801]: I1206 04:39:23.808603 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd44nznw_16612e3e-2588-413f-b0ff-0a97864485ca/manager/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.199238 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xvjcr_73bc5fd9-16cd-4af0-aa93-6230d268eaf6/registry-server/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.213964 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6f9c47f684-gjctz_23d978c7-b8fb-4796-b41e-4805344aa517/operator/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.420672 
4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jnb5d_a55050d3-bc38-44be-b873-79b80850217e/kube-rbac-proxy/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.581622 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-jnb5d_a55050d3-bc38-44be-b873-79b80850217e/manager/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.639910 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-747r8_92a97017-cb01-43fd-ac39-38f0b0f40e44/kube-rbac-proxy/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.754498 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-747r8_92a97017-cb01-43fd-ac39-38f0b0f40e44/manager/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.880128 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-9dn6g_4cf67ce7-dfd4-46b5-98e6-7c6ff303793b/operator/0.log" Dec 06 04:39:24 crc kubenswrapper[4801]: I1206 04:39:24.985201 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-knvll_1ab19549-8876-40f6-82eb-c29be8d76122/kube-rbac-proxy/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.053130 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-knvll_1ab19549-8876-40f6-82eb-c29be8d76122/manager/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.134548 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-wbqp2_b755167b-08be-4bcf-bde8-5918264dc691/kube-rbac-proxy/0.log" Dec 06 04:39:25 crc 
kubenswrapper[4801]: I1206 04:39:25.343269 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-wbqp2_b755167b-08be-4bcf-bde8-5918264dc691/manager/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.360455 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vq4xk_894496e6-3155-4f57-98a1-98a51a1f0f30/kube-rbac-proxy/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.385778 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vq4xk_894496e6-3155-4f57-98a1-98a51a1f0f30/manager/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.426296 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-69f8949d4-nwx88_d189f8dd-9d7d-40b5-806b-566da68bf67c/manager/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.524952 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cg5wj_7793e32c-9749-4b8f-a643-666cfa0783a8/kube-rbac-proxy/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.564864 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cg5wj_7793e32c-9749-4b8f-a643-666cfa0783a8/manager/0.log" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.760780 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:25 crc kubenswrapper[4801]: E1206 04:39:25.761305 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="extract-content" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.761324 4801 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="extract-content" Dec 06 04:39:25 crc kubenswrapper[4801]: E1206 04:39:25.761340 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="extract-utilities" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.761348 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="extract-utilities" Dec 06 04:39:25 crc kubenswrapper[4801]: E1206 04:39:25.761387 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="registry-server" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.761395 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="registry-server" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.761676 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="a87ebc9a-db5f-4ebe-96e2-5c605980c09f" containerName="registry-server" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.763460 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.771145 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.846826 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.847477 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.847659 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdbb\" (UniqueName: \"kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.949798 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.949871 4801 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.949929 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdbb\" (UniqueName: \"kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.950447 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.950594 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:25 crc kubenswrapper[4801]: I1206 04:39:25.971364 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xdbb\" (UniqueName: \"kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb\") pod \"redhat-marketplace-zp4tp\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:26 crc kubenswrapper[4801]: I1206 04:39:26.093416 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:26 crc kubenswrapper[4801]: I1206 04:39:26.638499 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:26 crc kubenswrapper[4801]: W1206 04:39:26.644955 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5afa340d_5b67_4e87_9dff_bed9dc546c71.slice/crio-9c64a4fff2633804423ce489c5251e3ce0a72cb8501b0111e04e46275b623db1 WatchSource:0}: Error finding container 9c64a4fff2633804423ce489c5251e3ce0a72cb8501b0111e04e46275b623db1: Status 404 returned error can't find the container with id 9c64a4fff2633804423ce489c5251e3ce0a72cb8501b0111e04e46275b623db1 Dec 06 04:39:27 crc kubenswrapper[4801]: I1206 04:39:27.343443 4801 generic.go:334] "Generic (PLEG): container finished" podID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerID="7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f" exitCode=0 Dec 06 04:39:27 crc kubenswrapper[4801]: I1206 04:39:27.343516 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerDied","Data":"7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f"} Dec 06 04:39:27 crc kubenswrapper[4801]: I1206 04:39:27.344211 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerStarted","Data":"9c64a4fff2633804423ce489c5251e3ce0a72cb8501b0111e04e46275b623db1"} Dec 06 04:39:27 crc kubenswrapper[4801]: I1206 04:39:27.345443 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 04:39:28 crc kubenswrapper[4801]: I1206 04:39:28.359599 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerStarted","Data":"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03"} Dec 06 04:39:29 crc kubenswrapper[4801]: I1206 04:39:29.369961 4801 generic.go:334] "Generic (PLEG): container finished" podID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerID="a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03" exitCode=0 Dec 06 04:39:29 crc kubenswrapper[4801]: I1206 04:39:29.370061 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerDied","Data":"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03"} Dec 06 04:39:30 crc kubenswrapper[4801]: I1206 04:39:30.380556 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerStarted","Data":"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3"} Dec 06 04:39:30 crc kubenswrapper[4801]: I1206 04:39:30.400356 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zp4tp" podStartSLOduration=2.7625825710000003 podStartE2EDuration="5.400330395s" podCreationTimestamp="2025-12-06 04:39:25 +0000 UTC" firstStartedPulling="2025-12-06 04:39:27.34521953 +0000 UTC m=+5620.467827092" lastFinishedPulling="2025-12-06 04:39:29.982967344 +0000 UTC m=+5623.105574916" observedRunningTime="2025-12-06 04:39:30.397421977 +0000 UTC m=+5623.520029559" watchObservedRunningTime="2025-12-06 04:39:30.400330395 +0000 UTC m=+5623.522937967" Dec 06 04:39:36 crc kubenswrapper[4801]: I1206 04:39:36.094178 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:36 crc kubenswrapper[4801]: I1206 04:39:36.094728 4801 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:36 crc kubenswrapper[4801]: I1206 04:39:36.144530 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:36 crc kubenswrapper[4801]: I1206 04:39:36.488502 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:36 crc kubenswrapper[4801]: I1206 04:39:36.531905 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:38 crc kubenswrapper[4801]: I1206 04:39:38.457385 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zp4tp" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="registry-server" containerID="cri-o://6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3" gracePeriod=2 Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.457806 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.468809 4801 generic.go:334] "Generic (PLEG): container finished" podID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerID="6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3" exitCode=0 Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.468854 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerDied","Data":"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3"} Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.468882 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zp4tp" event={"ID":"5afa340d-5b67-4e87-9dff-bed9dc546c71","Type":"ContainerDied","Data":"9c64a4fff2633804423ce489c5251e3ce0a72cb8501b0111e04e46275b623db1"} Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.468901 4801 scope.go:117] "RemoveContainer" containerID="6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.469017 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zp4tp" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.541878 4801 scope.go:117] "RemoveContainer" containerID="a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.569566 4801 scope.go:117] "RemoveContainer" containerID="7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.615905 4801 scope.go:117] "RemoveContainer" containerID="6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3" Dec 06 04:39:39 crc kubenswrapper[4801]: E1206 04:39:39.616539 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3\": container with ID starting with 6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3 not found: ID does not exist" containerID="6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.616802 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3"} err="failed to get container status \"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3\": rpc error: code = NotFound desc = could not find container \"6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3\": container with ID starting with 6808a05c5d484b68fb4caa1217fa8d65b7b530bebc6d47d5771dd53555bd39f3 not found: ID does not exist" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.617011 4801 scope.go:117] "RemoveContainer" containerID="a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03" Dec 06 04:39:39 crc kubenswrapper[4801]: E1206 04:39:39.617738 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03\": container with ID starting with a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03 not found: ID does not exist" containerID="a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.617972 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03"} err="failed to get container status \"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03\": rpc error: code = NotFound desc = could not find container \"a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03\": container with ID starting with a7a4bedb2c62b3edb55dc6b069dc0050bcfece9a56a922392ce2168eeb6b3b03 not found: ID does not exist" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.618143 4801 scope.go:117] "RemoveContainer" containerID="7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f" Dec 06 04:39:39 crc kubenswrapper[4801]: E1206 04:39:39.618883 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f\": container with ID starting with 7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f not found: ID does not exist" containerID="7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.619149 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f"} err="failed to get container status \"7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f\": rpc error: code = NotFound desc = could not find container 
\"7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f\": container with ID starting with 7b6789468dc99f443494bd2e666990115c263a00b84770f66eef104477b8b87f not found: ID does not exist" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.626902 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content\") pod \"5afa340d-5b67-4e87-9dff-bed9dc546c71\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.629215 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xdbb\" (UniqueName: \"kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb\") pod \"5afa340d-5b67-4e87-9dff-bed9dc546c71\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.630180 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities\") pod \"5afa340d-5b67-4e87-9dff-bed9dc546c71\" (UID: \"5afa340d-5b67-4e87-9dff-bed9dc546c71\") " Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.631113 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities" (OuterVolumeSpecName: "utilities") pod "5afa340d-5b67-4e87-9dff-bed9dc546c71" (UID: "5afa340d-5b67-4e87-9dff-bed9dc546c71"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.632250 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.636600 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb" (OuterVolumeSpecName: "kube-api-access-8xdbb") pod "5afa340d-5b67-4e87-9dff-bed9dc546c71" (UID: "5afa340d-5b67-4e87-9dff-bed9dc546c71"). InnerVolumeSpecName "kube-api-access-8xdbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.660885 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5afa340d-5b67-4e87-9dff-bed9dc546c71" (UID: "5afa340d-5b67-4e87-9dff-bed9dc546c71"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.735104 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xdbb\" (UniqueName: \"kubernetes.io/projected/5afa340d-5b67-4e87-9dff-bed9dc546c71-kube-api-access-8xdbb\") on node \"crc\" DevicePath \"\"" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.735528 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5afa340d-5b67-4e87-9dff-bed9dc546c71-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.809972 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:39 crc kubenswrapper[4801]: I1206 04:39:39.825303 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zp4tp"] Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.169661 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.170067 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.170116 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.170855 4801 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.170909 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453" gracePeriod=600 Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.224850 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" path="/var/lib/kubelet/pods/5afa340d-5b67-4e87-9dff-bed9dc546c71/volumes" Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.495837 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453" exitCode=0 Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.496102 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453"} Dec 06 04:39:41 crc kubenswrapper[4801]: I1206 04:39:41.496976 4801 scope.go:117] "RemoveContainer" containerID="ef9540e9716abf1d7a1010c7f50da61665950b55a8d96927c0198ec4b60e7164" Dec 06 04:39:42 crc kubenswrapper[4801]: I1206 04:39:42.507702 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerStarted","Data":"4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df"} Dec 06 04:39:44 crc kubenswrapper[4801]: I1206 04:39:44.959460 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-xlgtc_4c483458-0e51-4a45-86bc-df13cc609b9d/control-plane-machine-set-operator/0.log" Dec 06 04:39:45 crc kubenswrapper[4801]: I1206 04:39:45.143498 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsvsf_d5e2010c-d755-4f50-b5de-799ab1c30e5a/kube-rbac-proxy/0.log" Dec 06 04:39:45 crc kubenswrapper[4801]: I1206 04:39:45.144425 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-zsvsf_d5e2010c-d755-4f50-b5de-799ab1c30e5a/machine-api-operator/0.log" Dec 06 04:39:56 crc kubenswrapper[4801]: I1206 04:39:56.971739 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d5sqp_d493aca4-f8ca-4a5d-8f13-2776e232fb01/cert-manager-controller/0.log" Dec 06 04:39:57 crc kubenswrapper[4801]: I1206 04:39:57.165453 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-464bh_01b87268-3f8a-4d05-84bb-2c19e182dfb3/cert-manager-cainjector/0.log" Dec 06 04:39:57 crc kubenswrapper[4801]: I1206 04:39:57.173983 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l8cc5_0ff9519f-3a68-4d4f-9a7b-b09282b3db2b/cert-manager-webhook/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.508622 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-26q4j_ea8f41f3-7470-43ce-bbc5-e6cd9783dcd5/nmstate-console-plugin/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.716281 4801 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-g7gfq_d6214601-7874-4ac4-bb5f-1743be25951e/kube-rbac-proxy/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.736841 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4j999_4c393f31-868e-4b98-af1a-9dd74f31888c/nmstate-handler/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.759540 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-g7gfq_d6214601-7874-4ac4-bb5f-1743be25951e/nmstate-metrics/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.937384 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-5989q_a1550633-c2b8-4a89-a028-b6960c2f3bf9/nmstate-operator/0.log" Dec 06 04:40:08 crc kubenswrapper[4801]: I1206 04:40:08.972405 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-dt4ch_5c07b12c-9dad-4c3e-a31a-bf2d8c0c8243/nmstate-webhook/0.log" Dec 06 04:40:21 crc kubenswrapper[4801]: I1206 04:40:21.966369 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdtfj_22b5e566-52a6-48f6-9104-f61cf4dfdfce/kube-rbac-proxy/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.000331 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-cdtfj_22b5e566-52a6-48f6-9104-f61cf4dfdfce/controller/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.117793 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-frr-files/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.292742 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-frr-files/0.log" Dec 06 04:40:22 crc 
kubenswrapper[4801]: I1206 04:40:22.293528 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-reloader/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.305534 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-metrics/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.321944 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-reloader/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.456969 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-frr-files/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.465629 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-metrics/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.467893 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-reloader/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.482984 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-metrics/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.678917 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-frr-files/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.685511 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-reloader/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.698796 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/cp-metrics/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.717031 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/controller/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.864486 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/frr-metrics/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.885376 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/kube-rbac-proxy/0.log" Dec 06 04:40:22 crc kubenswrapper[4801]: I1206 04:40:22.930169 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/kube-rbac-proxy-frr/0.log" Dec 06 04:40:23 crc kubenswrapper[4801]: I1206 04:40:23.103696 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/reloader/0.log" Dec 06 04:40:23 crc kubenswrapper[4801]: I1206 04:40:23.186359 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-lpggk_243b0e89-177c-4d78-a335-b8184f7f9cd3/frr-k8s-webhook-server/0.log" Dec 06 04:40:23 crc kubenswrapper[4801]: I1206 04:40:23.306771 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-744bd9595-tpzsg_0db979e7-3ec7-4dea-a942-9311355d7dca/manager/0.log" Dec 06 04:40:23 crc kubenswrapper[4801]: I1206 04:40:23.534261 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-86ccb96b46-7tj8d_4c6e4e25-33b0-47b5-826d-ba09dc42e398/webhook-server/0.log" Dec 06 04:40:23 crc kubenswrapper[4801]: I1206 04:40:23.620456 4801 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7f8ct_47d040e7-00fd-42d1-a652-2a9ef2eb383e/kube-rbac-proxy/0.log" Dec 06 04:40:24 crc kubenswrapper[4801]: I1206 04:40:24.294033 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7f8ct_47d040e7-00fd-42d1-a652-2a9ef2eb383e/speaker/0.log" Dec 06 04:40:24 crc kubenswrapper[4801]: I1206 04:40:24.645666 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-fs6tf_fe7e5882-0bbb-477d-889d-0c6ba99ea883/frr/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.128157 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/util/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.312496 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/pull/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.351097 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/pull/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.366387 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/util/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.563528 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/extract/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.579458 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/pull/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.592516 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f222dj_ca37372c-a05f-4ff7-ace6-3f33ad23b959/util/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.748774 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/util/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.921118 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/pull/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.945134 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/pull/0.log" Dec 06 04:40:36 crc kubenswrapper[4801]: I1206 04:40:36.949816 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/util/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.131103 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/pull/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.143952 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/extract/0.log" Dec 06 
04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.145499 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83sr2m7_719f8bb0-bfb5-4867-9d5c-5dd56bd64bab/util/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.317306 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-utilities/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.465178 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-content/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.473352 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-utilities/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.483686 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-content/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.655422 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-content/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.714022 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/extract-utilities/0.log" Dec 06 04:40:37 crc kubenswrapper[4801]: I1206 04:40:37.889017 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-utilities/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.080628 4801 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-utilities/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.145927 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-content/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.207116 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-content/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.318977 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pv4mt_5608f948-89f4-4888-92bd-1559fc521d5e/registry-server/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.367855 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-utilities/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.420035 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/extract-content/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.635683 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rpzsr_f81f28d8-ea1d-4089-bb9c-cbe684ad3044/marketplace-operator/0.log" Dec 06 04:40:38 crc kubenswrapper[4801]: I1206 04:40:38.791030 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-utilities/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.018307 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-utilities/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.034704 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-content/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.061551 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-content/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.194566 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-content/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.208308 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/extract-utilities/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.415277 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-utilities/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.600390 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s78gq_094f9cf9-987e-4ee8-84a8-19f016fe78f2/registry-server/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.660060 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-utilities/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.695189 4801 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-content/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.701375 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qvppv_2ca01f64-dbe9-4738-a4be-835a46b80389/registry-server/0.log" Dec 06 04:40:39 crc kubenswrapper[4801]: I1206 04:40:39.798114 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-content/0.log" Dec 06 04:40:40 crc kubenswrapper[4801]: I1206 04:40:40.012628 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-utilities/0.log" Dec 06 04:40:40 crc kubenswrapper[4801]: I1206 04:40:40.025899 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/extract-content/0.log" Dec 06 04:40:40 crc kubenswrapper[4801]: I1206 04:40:40.674736 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-6w9gj_d4fa5d86-6a1b-4a0b-811d-e33f2e0d9720/registry-server/0.log" Dec 06 04:41:41 crc kubenswrapper[4801]: I1206 04:41:41.170196 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:41:41 crc kubenswrapper[4801]: I1206 04:41:41.170722 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 06 04:42:11 crc kubenswrapper[4801]: I1206 04:42:11.169366 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:42:11 crc kubenswrapper[4801]: I1206 04:42:11.169944 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:42:41 crc kubenswrapper[4801]: I1206 04:42:41.170175 4801 patch_prober.go:28] interesting pod/machine-config-daemon-mjmtt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 04:42:41 crc kubenswrapper[4801]: I1206 04:42:41.170724 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 04:42:41 crc kubenswrapper[4801]: I1206 04:42:41.170802 4801 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" Dec 06 04:42:41 crc kubenswrapper[4801]: I1206 04:42:41.171559 4801 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df"} pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 04:42:41 crc kubenswrapper[4801]: I1206 04:42:41.171622 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerName="machine-config-daemon" containerID="cri-o://4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" gracePeriod=600 Dec 06 04:42:41 crc kubenswrapper[4801]: E1206 04:42:41.292226 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.181777 4801 generic.go:334] "Generic (PLEG): container finished" podID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" exitCode=0 Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.181801 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" event={"ID":"54a0ee06-a8e7-4d96-844f-d0dd3c90e900","Type":"ContainerDied","Data":"4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df"} Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.182291 4801 scope.go:117] "RemoveContainer" containerID="e19b012c02320a28dd41f91a76ffc93929b06bdfb9b743f156711beabd5a4453" Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.184620 4801 
scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.185670 4801 generic.go:334] "Generic (PLEG): container finished" podID="2a7769b9-27d2-439c-a836-872fabd0076d" containerID="31b9321e2e84a4f2b20c1135fbe33584ecb4c5b32316dff898a73352e4c84b1b" exitCode=0 Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.185728 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cd686/must-gather-tp6lv" event={"ID":"2a7769b9-27d2-439c-a836-872fabd0076d","Type":"ContainerDied","Data":"31b9321e2e84a4f2b20c1135fbe33584ecb4c5b32316dff898a73352e4c84b1b"} Dec 06 04:42:42 crc kubenswrapper[4801]: E1206 04:42:42.186030 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.186570 4801 scope.go:117] "RemoveContainer" containerID="31b9321e2e84a4f2b20c1135fbe33584ecb4c5b32316dff898a73352e4c84b1b" Dec 06 04:42:42 crc kubenswrapper[4801]: I1206 04:42:42.778989 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cd686_must-gather-tp6lv_2a7769b9-27d2-439c-a836-872fabd0076d/gather/0.log" Dec 06 04:42:50 crc kubenswrapper[4801]: I1206 04:42:50.712802 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cd686/must-gather-tp6lv"] Dec 06 04:42:50 crc kubenswrapper[4801]: I1206 04:42:50.713537 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cd686/must-gather-tp6lv" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" 
containerName="copy" containerID="cri-o://d3c8359f8f902bd79e04f01323b4aeb0b513c0d81817baa06b37e8e9f2999a30" gracePeriod=2 Dec 06 04:42:50 crc kubenswrapper[4801]: I1206 04:42:50.723813 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cd686/must-gather-tp6lv"] Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.266830 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cd686_must-gather-tp6lv_2a7769b9-27d2-439c-a836-872fabd0076d/copy/0.log" Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.267613 4801 generic.go:334] "Generic (PLEG): container finished" podID="2a7769b9-27d2-439c-a836-872fabd0076d" containerID="d3c8359f8f902bd79e04f01323b4aeb0b513c0d81817baa06b37e8e9f2999a30" exitCode=143 Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.690111 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cd686_must-gather-tp6lv_2a7769b9-27d2-439c-a836-872fabd0076d/copy/0.log" Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.690875 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.824073 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hdbf\" (UniqueName: \"kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf\") pod \"2a7769b9-27d2-439c-a836-872fabd0076d\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.824182 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output\") pod \"2a7769b9-27d2-439c-a836-872fabd0076d\" (UID: \"2a7769b9-27d2-439c-a836-872fabd0076d\") " Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.830968 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf" (OuterVolumeSpecName: "kube-api-access-4hdbf") pod "2a7769b9-27d2-439c-a836-872fabd0076d" (UID: "2a7769b9-27d2-439c-a836-872fabd0076d"). InnerVolumeSpecName "kube-api-access-4hdbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.926715 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hdbf\" (UniqueName: \"kubernetes.io/projected/2a7769b9-27d2-439c-a836-872fabd0076d-kube-api-access-4hdbf\") on node \"crc\" DevicePath \"\"" Dec 06 04:42:51 crc kubenswrapper[4801]: I1206 04:42:51.989686 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "2a7769b9-27d2-439c-a836-872fabd0076d" (UID: "2a7769b9-27d2-439c-a836-872fabd0076d"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:42:52 crc kubenswrapper[4801]: I1206 04:42:52.028298 4801 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2a7769b9-27d2-439c-a836-872fabd0076d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 06 04:42:52 crc kubenswrapper[4801]: I1206 04:42:52.279744 4801 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cd686_must-gather-tp6lv_2a7769b9-27d2-439c-a836-872fabd0076d/copy/0.log" Dec 06 04:42:52 crc kubenswrapper[4801]: I1206 04:42:52.280621 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cd686/must-gather-tp6lv" Dec 06 04:42:52 crc kubenswrapper[4801]: I1206 04:42:52.281916 4801 scope.go:117] "RemoveContainer" containerID="d3c8359f8f902bd79e04f01323b4aeb0b513c0d81817baa06b37e8e9f2999a30" Dec 06 04:42:52 crc kubenswrapper[4801]: I1206 04:42:52.301121 4801 scope.go:117] "RemoveContainer" containerID="31b9321e2e84a4f2b20c1135fbe33584ecb4c5b32316dff898a73352e4c84b1b" Dec 06 04:42:53 crc kubenswrapper[4801]: I1206 04:42:53.237176 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" path="/var/lib/kubelet/pods/2a7769b9-27d2-439c-a836-872fabd0076d/volumes" Dec 06 04:42:56 crc kubenswrapper[4801]: I1206 04:42:56.213665 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:42:56 crc kubenswrapper[4801]: E1206 04:42:56.214497 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" 
podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:43:08 crc kubenswrapper[4801]: I1206 04:43:08.213877 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:43:08 crc kubenswrapper[4801]: E1206 04:43:08.214931 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:43:21 crc kubenswrapper[4801]: I1206 04:43:21.212907 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:43:21 crc kubenswrapper[4801]: E1206 04:43:21.213604 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:43:23 crc kubenswrapper[4801]: I1206 04:43:23.201362 4801 scope.go:117] "RemoveContainer" containerID="a50bc636323a535958200c3ca508afb78f54fe4d99959880705c80fd2c7d6401" Dec 06 04:43:36 crc kubenswrapper[4801]: I1206 04:43:36.212947 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:43:36 crc kubenswrapper[4801]: E1206 04:43:36.213573 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:43:51 crc kubenswrapper[4801]: I1206 04:43:51.212962 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:43:51 crc kubenswrapper[4801]: E1206 04:43:51.213768 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:02 crc kubenswrapper[4801]: I1206 04:44:02.212222 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:44:02 crc kubenswrapper[4801]: E1206 04:44:02.213152 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:14 crc kubenswrapper[4801]: I1206 04:44:14.212885 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:44:14 crc kubenswrapper[4801]: E1206 04:44:14.213648 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:23 crc kubenswrapper[4801]: I1206 04:44:23.286538 4801 scope.go:117] "RemoveContainer" containerID="90fa09da9d099938d93c3fd520ed10db9a962de47e798a876816b577847bcf43" Dec 06 04:44:27 crc kubenswrapper[4801]: I1206 04:44:27.441951 4801 patch_prober.go:28] interesting pod/route-controller-manager-64f8858df9-hbqzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 04:44:27 crc kubenswrapper[4801]: I1206 04:44:27.442708 4801 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-64f8858df9-hbqzz" podUID="7834c771-55d0-4da3-9d45-a48e01403463" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 04:44:29 crc kubenswrapper[4801]: I1206 04:44:29.213179 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:44:29 crc kubenswrapper[4801]: E1206 04:44:29.213578 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:31 crc 
kubenswrapper[4801]: I1206 04:44:31.738026 4801 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="e69f07f2-fed0-4999-9167-1d3c6d17fccd" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 06 04:44:43 crc kubenswrapper[4801]: I1206 04:44:43.212549 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:44:43 crc kubenswrapper[4801]: E1206 04:44:43.213339 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.154975 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:44:47 crc kubenswrapper[4801]: E1206 04:44:47.155696 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="copy" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.155712 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="copy" Dec 06 04:44:47 crc kubenswrapper[4801]: E1206 04:44:47.155732 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="extract-content" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.155740 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="extract-content" Dec 06 04:44:47 crc kubenswrapper[4801]: E1206 04:44:47.155771 4801 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="gather" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.155780 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="gather" Dec 06 04:44:47 crc kubenswrapper[4801]: E1206 04:44:47.155805 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="registry-server" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.155835 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="registry-server" Dec 06 04:44:47 crc kubenswrapper[4801]: E1206 04:44:47.155862 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="extract-utilities" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.155872 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="extract-utilities" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.156083 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="gather" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.156100 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a7769b9-27d2-439c-a836-872fabd0076d" containerName="copy" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.156125 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="5afa340d-5b67-4e87-9dff-bed9dc546c71" containerName="registry-server" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.157808 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.168833 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.174847 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mphl9\" (UniqueName: \"kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.175023 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.175112 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.277576 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.278121 4801 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.278380 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.278554 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mphl9\" (UniqueName: \"kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.278795 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.309875 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mphl9\" (UniqueName: \"kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9\") pod \"community-operators-lsqzz\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:47 crc kubenswrapper[4801]: I1206 04:44:47.491092 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:48 crc kubenswrapper[4801]: I1206 04:44:48.039745 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:44:48 crc kubenswrapper[4801]: I1206 04:44:48.337367 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerStarted","Data":"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041"} Dec 06 04:44:48 crc kubenswrapper[4801]: I1206 04:44:48.337857 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerStarted","Data":"8a46293e31fb08e1b795db7b0639aa218f576832d5cd3cdb59be0b8fc6298ce8"} Dec 06 04:44:49 crc kubenswrapper[4801]: I1206 04:44:49.345550 4801 generic.go:334] "Generic (PLEG): container finished" podID="6a85b424-9001-4b4e-b7a7-d081a73bc457" containerID="4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041" exitCode=0 Dec 06 04:44:49 crc kubenswrapper[4801]: I1206 04:44:49.345602 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerDied","Data":"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041"} Dec 06 04:44:49 crc kubenswrapper[4801]: I1206 04:44:49.347904 4801 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 04:44:51 crc kubenswrapper[4801]: I1206 04:44:51.372462 4801 generic.go:334] "Generic (PLEG): container finished" podID="6a85b424-9001-4b4e-b7a7-d081a73bc457" containerID="f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b" exitCode=0 Dec 06 04:44:51 crc kubenswrapper[4801]: I1206 04:44:51.372709 4801 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerDied","Data":"f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b"} Dec 06 04:44:55 crc kubenswrapper[4801]: I1206 04:44:55.213577 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:44:55 crc kubenswrapper[4801]: E1206 04:44:55.214405 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:44:55 crc kubenswrapper[4801]: I1206 04:44:55.407018 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerStarted","Data":"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf"} Dec 06 04:44:55 crc kubenswrapper[4801]: I1206 04:44:55.429592 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lsqzz" podStartSLOduration=3.380513187 podStartE2EDuration="8.429570354s" podCreationTimestamp="2025-12-06 04:44:47 +0000 UTC" firstStartedPulling="2025-12-06 04:44:49.34766998 +0000 UTC m=+5942.470277552" lastFinishedPulling="2025-12-06 04:44:54.396727147 +0000 UTC m=+5947.519334719" observedRunningTime="2025-12-06 04:44:55.421828875 +0000 UTC m=+5948.544436467" watchObservedRunningTime="2025-12-06 04:44:55.429570354 +0000 UTC m=+5948.552177926" Dec 06 04:44:57 crc kubenswrapper[4801]: I1206 04:44:57.494927 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:57 crc kubenswrapper[4801]: I1206 04:44:57.495262 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:44:57 crc kubenswrapper[4801]: I1206 04:44:57.538345 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.138808 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc"] Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.140719 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.142536 4801 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.149163 4801 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.149828 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc"] Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.167376 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.167921 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.168095 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bxd5\" (UniqueName: \"kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.269303 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.270359 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.270562 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bxd5\" (UniqueName: \"kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.271826 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.280584 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.295683 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bxd5\" (UniqueName: \"kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5\") pod \"collect-profiles-29416605-pmzlc\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.458601 4801 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:00 crc kubenswrapper[4801]: I1206 04:45:00.941223 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc"] Dec 06 04:45:01 crc kubenswrapper[4801]: I1206 04:45:01.455168 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" event={"ID":"16bac0dc-6a32-4f8f-9020-63505b5c58a4","Type":"ContainerStarted","Data":"baded12320081d2e5c5905f959f89afe4f9fc3f45aa4356c141aa6bf2a22135d"} Dec 06 04:45:02 crc kubenswrapper[4801]: I1206 04:45:02.465001 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" event={"ID":"16bac0dc-6a32-4f8f-9020-63505b5c58a4","Type":"ContainerStarted","Data":"fd3d513973dd2cd045f4f9b78d1f52df3561222eab3afc180e1f2323f1cc1e9a"} Dec 06 04:45:02 crc kubenswrapper[4801]: I1206 04:45:02.483378 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" podStartSLOduration=2.483358601 podStartE2EDuration="2.483358601s" podCreationTimestamp="2025-12-06 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 04:45:02.476747542 +0000 UTC m=+5955.599355114" watchObservedRunningTime="2025-12-06 04:45:02.483358601 +0000 UTC m=+5955.605966173" Dec 06 04:45:03 crc kubenswrapper[4801]: I1206 04:45:03.475671 4801 generic.go:334] "Generic (PLEG): container finished" podID="16bac0dc-6a32-4f8f-9020-63505b5c58a4" containerID="fd3d513973dd2cd045f4f9b78d1f52df3561222eab3afc180e1f2323f1cc1e9a" exitCode=0 Dec 06 04:45:03 crc kubenswrapper[4801]: I1206 04:45:03.476681 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" event={"ID":"16bac0dc-6a32-4f8f-9020-63505b5c58a4","Type":"ContainerDied","Data":"fd3d513973dd2cd045f4f9b78d1f52df3561222eab3afc180e1f2323f1cc1e9a"} Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.818737 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.855566 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bxd5\" (UniqueName: \"kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5\") pod \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.855711 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume\") pod \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.855808 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume\") pod \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\" (UID: \"16bac0dc-6a32-4f8f-9020-63505b5c58a4\") " Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.856466 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "16bac0dc-6a32-4f8f-9020-63505b5c58a4" (UID: "16bac0dc-6a32-4f8f-9020-63505b5c58a4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.861994 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5" (OuterVolumeSpecName: "kube-api-access-2bxd5") pod "16bac0dc-6a32-4f8f-9020-63505b5c58a4" (UID: "16bac0dc-6a32-4f8f-9020-63505b5c58a4"). InnerVolumeSpecName "kube-api-access-2bxd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.862324 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "16bac0dc-6a32-4f8f-9020-63505b5c58a4" (UID: "16bac0dc-6a32-4f8f-9020-63505b5c58a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.957773 4801 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16bac0dc-6a32-4f8f-9020-63505b5c58a4-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.957806 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bxd5\" (UniqueName: \"kubernetes.io/projected/16bac0dc-6a32-4f8f-9020-63505b5c58a4-kube-api-access-2bxd5\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:04 crc kubenswrapper[4801]: I1206 04:45:04.957820 4801 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/16bac0dc-6a32-4f8f-9020-63505b5c58a4-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:05 crc kubenswrapper[4801]: I1206 04:45:05.493917 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" 
event={"ID":"16bac0dc-6a32-4f8f-9020-63505b5c58a4","Type":"ContainerDied","Data":"baded12320081d2e5c5905f959f89afe4f9fc3f45aa4356c141aa6bf2a22135d"} Dec 06 04:45:05 crc kubenswrapper[4801]: I1206 04:45:05.493960 4801 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baded12320081d2e5c5905f959f89afe4f9fc3f45aa4356c141aa6bf2a22135d" Dec 06 04:45:05 crc kubenswrapper[4801]: I1206 04:45:05.493990 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416605-pmzlc" Dec 06 04:45:05 crc kubenswrapper[4801]: I1206 04:45:05.554874 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn"] Dec 06 04:45:05 crc kubenswrapper[4801]: I1206 04:45:05.562752 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416560-mdjsn"] Dec 06 04:45:06 crc kubenswrapper[4801]: I1206 04:45:06.213019 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:45:06 crc kubenswrapper[4801]: E1206 04:45:06.213726 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.108252 4801 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:07 crc kubenswrapper[4801]: E1206 04:45:07.108932 4801 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16bac0dc-6a32-4f8f-9020-63505b5c58a4" 
containerName="collect-profiles" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.108956 4801 state_mem.go:107] "Deleted CPUSet assignment" podUID="16bac0dc-6a32-4f8f-9020-63505b5c58a4" containerName="collect-profiles" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.109225 4801 memory_manager.go:354] "RemoveStaleState removing state" podUID="16bac0dc-6a32-4f8f-9020-63505b5c58a4" containerName="collect-profiles" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.113457 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.123332 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.225016 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2792cd45-43ee-4546-b8c4-027d4cbe5e29" path="/var/lib/kubelet/pods/2792cd45-43ee-4546-b8c4-027d4cbe5e29/volumes" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.312898 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9vqt\" (UniqueName: \"kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.312980 4801 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.313252 4801 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.415684 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9vqt\" (UniqueName: \"kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.415773 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.415855 4801 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.416701 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.416876 4801 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.439644 4801 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9vqt\" (UniqueName: \"kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt\") pod \"certified-operators-4qcxv\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.551501 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:45:07 crc kubenswrapper[4801]: I1206 04:45:07.737591 4801 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:08 crc kubenswrapper[4801]: I1206 04:45:08.183197 4801 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:08 crc kubenswrapper[4801]: W1206 04:45:08.184533 4801 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51577e_26e2_4fc3_a21a_cf8349ea00ef.slice/crio-8178dce56c265e2da209dc64d7f99241a3f573751cf48eee3fbb9238abf38792 WatchSource:0}: Error finding container 8178dce56c265e2da209dc64d7f99241a3f573751cf48eee3fbb9238abf38792: Status 404 returned error can't find the container with id 8178dce56c265e2da209dc64d7f99241a3f573751cf48eee3fbb9238abf38792 Dec 06 04:45:08 crc kubenswrapper[4801]: I1206 04:45:08.524804 4801 generic.go:334] "Generic (PLEG): container finished" podID="3d51577e-26e2-4fc3-a21a-cf8349ea00ef" 
containerID="672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373" exitCode=0 Dec 06 04:45:08 crc kubenswrapper[4801]: I1206 04:45:08.524870 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerDied","Data":"672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373"} Dec 06 04:45:08 crc kubenswrapper[4801]: I1206 04:45:08.525148 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerStarted","Data":"8178dce56c265e2da209dc64d7f99241a3f573751cf48eee3fbb9238abf38792"} Dec 06 04:45:09 crc kubenswrapper[4801]: I1206 04:45:09.558301 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerStarted","Data":"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e"} Dec 06 04:45:09 crc kubenswrapper[4801]: I1206 04:45:09.883721 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:45:09 crc kubenswrapper[4801]: I1206 04:45:09.884046 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lsqzz" podUID="6a85b424-9001-4b4e-b7a7-d081a73bc457" containerName="registry-server" containerID="cri-o://4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf" gracePeriod=2 Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.359268 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.477552 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities\") pod \"6a85b424-9001-4b4e-b7a7-d081a73bc457\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.477818 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content\") pod \"6a85b424-9001-4b4e-b7a7-d081a73bc457\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.477875 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mphl9\" (UniqueName: \"kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9\") pod \"6a85b424-9001-4b4e-b7a7-d081a73bc457\" (UID: \"6a85b424-9001-4b4e-b7a7-d081a73bc457\") " Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.480355 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities" (OuterVolumeSpecName: "utilities") pod "6a85b424-9001-4b4e-b7a7-d081a73bc457" (UID: "6a85b424-9001-4b4e-b7a7-d081a73bc457"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.488950 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9" (OuterVolumeSpecName: "kube-api-access-mphl9") pod "6a85b424-9001-4b4e-b7a7-d081a73bc457" (UID: "6a85b424-9001-4b4e-b7a7-d081a73bc457"). InnerVolumeSpecName "kube-api-access-mphl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.528397 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a85b424-9001-4b4e-b7a7-d081a73bc457" (UID: "6a85b424-9001-4b4e-b7a7-d081a73bc457"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.567781 4801 generic.go:334] "Generic (PLEG): container finished" podID="6a85b424-9001-4b4e-b7a7-d081a73bc457" containerID="4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf" exitCode=0 Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.567842 4801 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsqzz" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.567862 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerDied","Data":"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf"} Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.567895 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqzz" event={"ID":"6a85b424-9001-4b4e-b7a7-d081a73bc457","Type":"ContainerDied","Data":"8a46293e31fb08e1b795db7b0639aa218f576832d5cd3cdb59be0b8fc6298ce8"} Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.567916 4801 scope.go:117] "RemoveContainer" containerID="4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.571716 4801 generic.go:334] "Generic (PLEG): container finished" podID="3d51577e-26e2-4fc3-a21a-cf8349ea00ef" 
containerID="28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e" exitCode=0 Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.571808 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerDied","Data":"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e"} Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.579989 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.580186 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mphl9\" (UniqueName: \"kubernetes.io/projected/6a85b424-9001-4b4e-b7a7-d081a73bc457-kube-api-access-mphl9\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.580271 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a85b424-9001-4b4e-b7a7-d081a73bc457-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.596375 4801 scope.go:117] "RemoveContainer" containerID="f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.621837 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.627202 4801 scope.go:117] "RemoveContainer" containerID="4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.629306 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lsqzz"] Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.663817 4801 
scope.go:117] "RemoveContainer" containerID="4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf" Dec 06 04:45:10 crc kubenswrapper[4801]: E1206 04:45:10.664414 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf\": container with ID starting with 4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf not found: ID does not exist" containerID="4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.664469 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf"} err="failed to get container status \"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf\": rpc error: code = NotFound desc = could not find container \"4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf\": container with ID starting with 4efb2f2205577de0c28b325b48db77fee05f6102429b3fe7cf4c34cbbc123acf not found: ID does not exist" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.664527 4801 scope.go:117] "RemoveContainer" containerID="f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b" Dec 06 04:45:10 crc kubenswrapper[4801]: E1206 04:45:10.664944 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b\": container with ID starting with f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b not found: ID does not exist" containerID="f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.664981 4801 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b"} err="failed to get container status \"f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b\": rpc error: code = NotFound desc = could not find container \"f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b\": container with ID starting with f31a1786cacc81039a7652e6d5fef4306c9a9920b4b55b875e2f667cc28a7c3b not found: ID does not exist" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.665003 4801 scope.go:117] "RemoveContainer" containerID="4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041" Dec 06 04:45:10 crc kubenswrapper[4801]: E1206 04:45:10.665351 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041\": container with ID starting with 4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041 not found: ID does not exist" containerID="4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041" Dec 06 04:45:10 crc kubenswrapper[4801]: I1206 04:45:10.665384 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041"} err="failed to get container status \"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041\": rpc error: code = NotFound desc = could not find container \"4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041\": container with ID starting with 4367605e38e1c52219682c896133c9447f671641d93373fbb76325ef68dcc041 not found: ID does not exist" Dec 06 04:45:11 crc kubenswrapper[4801]: I1206 04:45:11.225702 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a85b424-9001-4b4e-b7a7-d081a73bc457" path="/var/lib/kubelet/pods/6a85b424-9001-4b4e-b7a7-d081a73bc457/volumes" Dec 06 04:45:11 crc kubenswrapper[4801]: I1206 
04:45:11.590449 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerStarted","Data":"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863"} Dec 06 04:45:17 crc kubenswrapper[4801]: I1206 04:45:17.738583 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:17 crc kubenswrapper[4801]: I1206 04:45:17.739130 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:17 crc kubenswrapper[4801]: I1206 04:45:17.783461 4801 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:17 crc kubenswrapper[4801]: I1206 04:45:17.799569 4801 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4qcxv" podStartSLOduration=8.335245396 podStartE2EDuration="10.799549027s" podCreationTimestamp="2025-12-06 04:45:07 +0000 UTC" firstStartedPulling="2025-12-06 04:45:08.526529561 +0000 UTC m=+5961.649137133" lastFinishedPulling="2025-12-06 04:45:10.990833182 +0000 UTC m=+5964.113440764" observedRunningTime="2025-12-06 04:45:11.619283193 +0000 UTC m=+5964.741890765" watchObservedRunningTime="2025-12-06 04:45:17.799549027 +0000 UTC m=+5970.922156599" Dec 06 04:45:18 crc kubenswrapper[4801]: I1206 04:45:18.212344 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:45:18 crc kubenswrapper[4801]: E1206 04:45:18.212635 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:45:18 crc kubenswrapper[4801]: I1206 04:45:18.692599 4801 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:18 crc kubenswrapper[4801]: I1206 04:45:18.736927 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:20 crc kubenswrapper[4801]: I1206 04:45:20.663397 4801 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4qcxv" podUID="3d51577e-26e2-4fc3-a21a-cf8349ea00ef" containerName="registry-server" containerID="cri-o://0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863" gracePeriod=2 Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.620655 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.674519 4801 generic.go:334] "Generic (PLEG): container finished" podID="3d51577e-26e2-4fc3-a21a-cf8349ea00ef" containerID="0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863" exitCode=0 Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.674774 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerDied","Data":"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863"} Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.674807 4801 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qcxv" event={"ID":"3d51577e-26e2-4fc3-a21a-cf8349ea00ef","Type":"ContainerDied","Data":"8178dce56c265e2da209dc64d7f99241a3f573751cf48eee3fbb9238abf38792"} Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.674832 4801 scope.go:117] "RemoveContainer" containerID="0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.674836 4801 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qcxv" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.688093 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9vqt\" (UniqueName: \"kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt\") pod \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.688172 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities\") pod \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.688270 4801 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content\") pod \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\" (UID: \"3d51577e-26e2-4fc3-a21a-cf8349ea00ef\") " Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.689579 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities" (OuterVolumeSpecName: "utilities") pod "3d51577e-26e2-4fc3-a21a-cf8349ea00ef" (UID: "3d51577e-26e2-4fc3-a21a-cf8349ea00ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.696985 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt" (OuterVolumeSpecName: "kube-api-access-d9vqt") pod "3d51577e-26e2-4fc3-a21a-cf8349ea00ef" (UID: "3d51577e-26e2-4fc3-a21a-cf8349ea00ef"). InnerVolumeSpecName "kube-api-access-d9vqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.699555 4801 scope.go:117] "RemoveContainer" containerID="28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.742088 4801 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d51577e-26e2-4fc3-a21a-cf8349ea00ef" (UID: "3d51577e-26e2-4fc3-a21a-cf8349ea00ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.755606 4801 scope.go:117] "RemoveContainer" containerID="672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.789798 4801 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.789827 4801 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9vqt\" (UniqueName: \"kubernetes.io/projected/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-kube-api-access-d9vqt\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.789858 4801 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d51577e-26e2-4fc3-a21a-cf8349ea00ef-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.802772 4801 scope.go:117] "RemoveContainer" containerID="0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863" Dec 06 04:45:21 crc kubenswrapper[4801]: E1206 04:45:21.803263 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863\": container with ID starting with 0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863 not found: ID does not exist" containerID="0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.803323 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863"} err="failed to get container status \"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863\": rpc error: code = NotFound desc = could not find container \"0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863\": container with ID starting with 0a86656637d5612fda8c510376257363d70d1ea10adf745e9dfdbf4563ca9863 not found: ID does not exist" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.803348 4801 scope.go:117] "RemoveContainer" containerID="28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e" Dec 06 04:45:21 crc kubenswrapper[4801]: E1206 04:45:21.803869 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e\": container with ID starting with 28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e not found: ID does not exist" containerID="28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.803901 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e"} err="failed to get container status \"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e\": rpc error: code = NotFound desc = could not find container 
\"28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e\": container with ID starting with 28c5eabf6f4714881b614b7b7e3e4ed00d33e48998ac8c0de3ca356f88c34e5e not found: ID does not exist" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.803924 4801 scope.go:117] "RemoveContainer" containerID="672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373" Dec 06 04:45:21 crc kubenswrapper[4801]: E1206 04:45:21.804278 4801 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373\": container with ID starting with 672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373 not found: ID does not exist" containerID="672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373" Dec 06 04:45:21 crc kubenswrapper[4801]: I1206 04:45:21.804323 4801 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373"} err="failed to get container status \"672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373\": rpc error: code = NotFound desc = could not find container \"672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373\": container with ID starting with 672496e7cbb54f767b00ed4ef030b0761d3b78834390c5fed6c7ae16374b6373 not found: ID does not exist" Dec 06 04:45:22 crc kubenswrapper[4801]: I1206 04:45:22.004815 4801 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:22 crc kubenswrapper[4801]: I1206 04:45:22.013930 4801 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4qcxv"] Dec 06 04:45:23 crc kubenswrapper[4801]: I1206 04:45:23.225192 4801 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d51577e-26e2-4fc3-a21a-cf8349ea00ef" 
path="/var/lib/kubelet/pods/3d51577e-26e2-4fc3-a21a-cf8349ea00ef/volumes" Dec 06 04:45:23 crc kubenswrapper[4801]: I1206 04:45:23.347812 4801 scope.go:117] "RemoveContainer" containerID="79d416fa2d2546ad2c3223462fe64a4bad9fc7d7f093a9a745e579a227e0e3f7" Dec 06 04:45:33 crc kubenswrapper[4801]: I1206 04:45:33.213227 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:45:33 crc kubenswrapper[4801]: E1206 04:45:33.214261 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:45:47 crc kubenswrapper[4801]: I1206 04:45:47.217908 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:45:47 crc kubenswrapper[4801]: E1206 04:45:47.218841 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:45:59 crc kubenswrapper[4801]: I1206 04:45:59.212890 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:45:59 crc kubenswrapper[4801]: E1206 04:45:59.213646 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:46:14 crc kubenswrapper[4801]: I1206 04:46:14.213399 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:46:14 crc kubenswrapper[4801]: E1206 04:46:14.214630 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:46:27 crc kubenswrapper[4801]: I1206 04:46:27.221659 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:46:27 crc kubenswrapper[4801]: E1206 04:46:27.222729 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:46:38 crc kubenswrapper[4801]: I1206 04:46:38.212704 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:46:38 crc kubenswrapper[4801]: E1206 04:46:38.213570 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:46:50 crc kubenswrapper[4801]: I1206 04:46:50.212981 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:46:50 crc kubenswrapper[4801]: E1206 04:46:50.214128 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:47:03 crc kubenswrapper[4801]: I1206 04:47:03.212854 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:47:03 crc kubenswrapper[4801]: E1206 04:47:03.213705 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:47:14 crc kubenswrapper[4801]: I1206 04:47:14.221175 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:47:14 crc kubenswrapper[4801]: E1206 04:47:14.223173 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:47:27 crc kubenswrapper[4801]: I1206 04:47:27.218887 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:47:27 crc kubenswrapper[4801]: E1206 04:47:27.220017 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:47:40 crc kubenswrapper[4801]: I1206 04:47:40.212949 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df" Dec 06 04:47:40 crc kubenswrapper[4801]: E1206 04:47:40.213642 4801 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mjmtt_openshift-machine-config-operator(54a0ee06-a8e7-4d96-844f-d0dd3c90e900)\"" pod="openshift-machine-config-operator/machine-config-daemon-mjmtt" podUID="54a0ee06-a8e7-4d96-844f-d0dd3c90e900" Dec 06 04:47:55 crc kubenswrapper[4801]: I1206 04:47:55.212785 4801 scope.go:117] "RemoveContainer" containerID="4c46ec292ea0dc64309058a9568638df4c33b22fbeecf3045971bafe3354c1df"